Source: Deep Learning on Medium
Here are the highlights from an eventful week — Germany plans 3 billion euros in AI investment; How to Teach Artificial Intelligence Some Common Sense; Google open sources BigGAN generators; ImageNet/ResNet-50 Training in 224 Seconds
The German government wants to promote the use of AI applications in business within a framework that protects fundamental social values and individual rights, and is planning to invest 3 billion euros!
The German government has set aside around 3 billion euros for research and development of artificial intelligence, as… (www.reuters.com)
Just a year old, Standard Cognition competes with Amazon Go in making the shopping experience seamless. Unlike Amazon Go, they deploy overhead cameras that identify you by shape and movement, not facial recognition.
This WIRED article explores the current power and limits of deep learning, and the challenge of making AI truly able to reason.
Five years ago, the coders at DeepMind, a London-based artificial intelligence company, watched excitedly as an AI… (www.wired.com)
A study by researchers at Georgia Institute of Technology shows that creating systems able to perform more mundane tasks, such as dressing themselves, is proving to be an enormous challenge as well.
Tutorials, Tools and Tips
This is a quick checklist of important things you need to keep in mind while pushing your scikit-learn models into production.
These are just heads-ups, specifically for scikit-learn, not a full workflow you can follow; The problem of how to… (queirozf.com)
Here’s a list of papers in Deep Reinforcement Learning curated by the folks at OpenAI. It’s a great resource for someone looking to get started and it’s a lot of reading!
What follows is a list of papers in deep RL that are worth reading. This is far from comprehensive, but should provide… (spinningup.openai.com)
DeepMind has open-sourced the BigGAN generators on TF Hub. Dig in to explore some of the most impressive GAN samples generated yet.
This research work has generated a lot of conversation among researchers. The paper proves that gradient descent achieves zero training loss in polynomial time for deep over-parameterized neural networks with residual connections (ResNets).
Abstract: Gradient descent finds a global minimum in training deep neural networks despite the objective function being… (arxiv.org)
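To give a flavor of what "zero training loss in polynomial time" means here, results of this kind typically establish linear convergence of the training loss under sufficient over-parameterization. The symbols below are illustrative, not taken verbatim from the paper:

```latex
L(\theta_{k+1}) \;\le\; \left(1 - \frac{\eta \lambda_0}{2}\right) L(\theta_k)
\quad\Longrightarrow\quad
L(\theta_k) \;\le\; \left(1 - \frac{\eta \lambda_0}{2}\right)^{k} L(\theta_0)
```

where $\eta$ is the step size and $\lambda_0 > 0$ is the least eigenvalue of a Gram matrix induced by the network at initialization; when the network is wide enough, this eigenvalue stays bounded away from zero throughout training, so the loss shrinks geometrically to zero.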
By applying two techniques, batch size control and 2D-Torus all-reduce, this paper claims to have successfully trained ImageNet/ResNet-50 in 224 seconds on an ABCI cluster without significant accuracy loss!
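The idea behind a 2D-torus all-reduce is to arrange workers in a grid and reduce gradients first along rows, then along columns, so each phase involves far fewer participants than one giant ring over all workers. Here is a toy single-process simulation of that two-phase reduction (illustrative only; the names and structure are my own, not the paper's implementation, which runs over real interconnects):

```python
# Toy simulation of a 2D-torus all-reduce. Workers form a rows x cols grid,
# each holding a gradient vector. Phase 1 sums along each row; phase 2 sums
# along each column. Every worker ends up with the global sum, using two
# small reduction phases instead of one ring over all rows*cols workers.

def allreduce_2d_torus(grads, rows, cols):
    """grads: dict mapping (row, col) -> list of floats.
    Returns a dict of the same shape where every worker holds the global sum."""
    n = len(next(iter(grads.values())))

    # Phase 1: reduce along each row; every worker in a row gets the row sum.
    row_sums = {}
    for r in range(rows):
        s = [0.0] * n
        for c in range(cols):
            s = [a + b for a, b in zip(s, grads[(r, c)])]
        row_sums[r] = s

    # Phase 2: reduce along each column. After phase 1, every column holds
    # one copy of each row sum, so summing down a column yields the global sum.
    total = [0.0] * n
    for r in range(rows):
        total = [a + b for a, b in zip(total, row_sums[r])]

    return {(r, c): list(total) for r in range(rows) for c in range(cols)}

# Example: a 2x2 grid where worker (r, c) holds the gradient [2r + c].
grid = {(r, c): [float(2 * r + c)] for r in range(2) for c in range(2)}
result = allreduce_2d_torus(grid, 2, 2)
# Every worker now holds the global sum: 0 + 1 + 2 + 3 = 6
```

In a real cluster each phase would itself be a ring or tree all-reduce over network links, but the payoff is the same: communication cost scales with rows + cols rather than rows * cols.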
If you like what you are reading, please follow and recommend it to your friends or give a shoutout on Twitter! I’d be glad to hear your suggestions and recommendations @deephunt_in or in the comments below!