Alexa, how is Amazon doing in AI?
Supersmart algorithms won’t take all the jobs, but they are learning faster than ever, doing everything from medical diagnostics to serving up ads.
We’ve been promised a revolution in how and why nearly everything happens. But the limits of modern artificial intelligence are closer than we think.
Most comparisons between human drivers and automated vehicles have been at best uneven, and at worst unfair.
This week, the school announced the launch of the MIT Intelligence Quest, an initiative aimed at turning its AI research into something it believes could be game-changing for the field. The school has divided its plan into two distinct categories: "The Core" and "The Bridge."
Smaller algorithms that don’t need mountains of data to train are coming.
Something I often hear in the machine learning community and in media articles is "Worries about superintelligence are a distraction from the real problem X that we are facing today with AI" (where X = algorithmic bias, technological unemployment, interpretability, data privacy, etc.). This competitive attitude gives the impression that immediate and longer-term safety concerns are in conflict. But is there actually a tradeoff between them?
Amazon and two other American titans are trying to shake up health care by experimenting with their own employees’ coverage. By Chinese standards, they’re behind the curve. Technology companies like Alibaba and Tencent have made health care a priority for years, and are using China as their laboratory. After testing online medical advice and drug tracking systems, they are now focused on a more advanced tool: artificial intelligence.
The program uses state-of-the-art AI techniques, but simple tests show that it’s a long way from real understanding.
Don't Make AI Artificially Stupid in the Name of Transparency
A small group of colleagues and I have started a massive empirical effort to catalogue mental models that are ambient in our field, to formalize them, and to then validate them with experiments. It’s a lot of work. I think it’s the first step to developing a layered mental model of deep learning that you could teach in high schools.
In this tutorial, you will learn how to train and test an end-to-end deep learning model for autonomous driving using data collected from the AirSim simulation environment. You will train a model to learn how to steer a car through a portion of the Mountain/Landscape map in AirSim using a single front-facing webcam for visual input. Such a task is usually considered the "hello world" of autonomous driving, but after finishing this tutorial you will have enough background to start exploring new ideas on your own. Along the way, you will also learn some practical aspects and nuances of working with end-to-end deep learning methods.
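To make the "end-to-end" idea concrete: the model maps raw camera pixels directly to a steering command, with no hand-built perception pipeline in between. The AirSim tutorial uses a convolutional network on real simulator frames; the sketch below is a deliberately minimal stand-in, using synthetic flattened grayscale "frames" and a linear model trained by plain gradient descent, so the pixels-to-steering regression loop is visible without any deep learning framework. All names and sizes here are illustrative assumptions, not part of the tutorial.

```python
import numpy as np

# Minimal end-to-end "image -> steering angle" sketch (illustrative only).
# Synthetic 8x8 grayscale frames stand in for camera input; a linear model
# stands in for the CNN used in the actual AirSim tutorial.

rng = np.random.default_rng(0)

n_frames, n_pixels = 200, 64                  # 200 synthetic 8x8 frames
X = rng.normal(size=(n_frames, n_pixels))     # flattened pixel intensities
true_w = rng.normal(size=n_pixels)            # hidden pixels->steering mapping
y = X @ true_w                                # synthetic steering angles

w = np.zeros(n_pixels)                        # model weights, start at zero
lr = 0.01
for _ in range(2000):                         # gradient descent on MSE loss
    grad = 2 * X.T @ (X @ w - y) / n_frames
    w -= lr * grad

pred = X @ w                                  # predicted steering angles
mse = np.mean((pred - y) ** 2)
```

In the real tutorial the same loop shape holds, except `X` comes from recorded simulator frames, the linear map is replaced by a CNN, and the optimizer is stochastic rather than full-batch.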
This paper is an attempt to explain all the matrix calculus you need in order to understand the training of deep neural networks.
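As a taste of the kind of result such matrix calculus covers (my own illustrative example, not drawn from the paper): the gradients of a squared-error loss through a single linear layer, which is the basic building block of backpropagation.

Given a layer $y = Wx + b$ and loss $L = \tfrac{1}{2}\|y - t\|^2$ for target $t$:

$$\frac{\partial L}{\partial y} = y - t, \qquad \frac{\partial L}{\partial W} = (y - t)\,x^{\top}, \qquad \frac{\partial L}{\partial b} = y - t, \qquad \frac{\partial L}{\partial x} = W^{\top}(y - t)$$

The last expression is what gets passed backward to the previous layer; stacking this rule layer by layer is exactly the chain-rule machinery the paper sets out to explain.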
Machine learning, and especially deep learning, is driving the evolution of artificial intelligence (AI). In the beginning, deep learning was primarily a software play. Starting in 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in both academia and industry. This year, we have seen more and more players, including the world's top semiconductor companies, a number of startups, and even tech giants such as Google, jump into the race. I believe it is interesting to look at them together, so I built this list of AI/ML/DL ICs and IPs on GitHub and keep it updated. If you have any suggestions or new information, please let me know.
Source: Deep Learning on Medium