How to ride the third wave of AI

We are at a very exciting juncture in the development of artificial intelligence (AI). We are starting to see implementations of the third wave of the technology, in which machines far surpass human capabilities in various application domains, creating all kinds of opportunities for businesses. To exploit its full potential, companies need to rethink how they operate and put AI at the heart of everything they do.

The first AI wave was built on statistics-based systems. The best-known examples are probably the information-retrieval algorithms used by big internet companies in AI's early years, such as Google's PageRank algorithm. The second wave brought a broader set of machine learning techniques, such as logistic regression and support vector machines, which are still used across all kinds of businesses, from banking to digital marketing.

The third wave is deep learning. It manifests as so-called perception AI, which mirrors the human perception system: sight, hearing, touch and so on. We encounter it in speech recognition and image recognition: in smart speakers that recognise what you say; in email programs that predict what you want to write next; in mobile phones unlocked by facial recognition; in digital marketing and advertising tools that predict customer behaviour; and in many other use cases. This wave has emerged in the last five or so years and has far surpassed human capabilities in these areas.

In terms of applying the technology to real-world products, we are at different stages depending on the application. Smart speakers, for example, are very good at deciphering speech in ideal conditions, such as someone speaking loudly and directly into the microphone, but less so in real-world use (if other people are talking in the same room, say). Similarly with facial recognition: your mobile phone will recognise you when you look directly at it, but surveillance cameras in public spaces are less accurate with large crowds in which some faces are partially obscured.

Object recognition is the same. Vehicles are now pretty good at recognising other vehicles and pedestrians as part of their advanced driving assistance, but how effective this is depends on the weather: rain, darkness or glare can all reduce accuracy. Objects in our homes (cups, TV remotes, chairs, etc.) are even harder to recognise. That's why we don't have robots helping us around the house. At least not yet!

The way you improve a deep learning system is with data: the more high-quality data you feed it, the better it performs. The key to quality is making your training data as similar as possible to real-world use cases. The best way to achieve this is to get your product into your customers' hands and, with their consent, collect data from their day-to-day usage, so that your training data comes from the exact environment in which people use your product. Tesla is a great example. Because it has a sizable and devoted user base driving its electric cars, it can collect masses of data, which it uses to retrain its deep learning models. It then ships the improvements back to its cars through continual over-the-air (OTA) software updates.
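To make the "more high-quality, in-domain data" idea concrete, here is a minimal sketch in Python. It is not anything Tesla-specific: it uses scikit-learn's logistic regression as a lightweight stand-in for a deep model, and synthetic "lab" versus noisy "field" data as stand-ins for ideal versus real-world conditions, with the noise level and splits chosen purely for illustration. It trains on clean data first, then retrains after folding in collected field data and compares accuracy on held-out field samples.

```python
# Minimal sketch of "train on data that matches real-world use".
# Everything here is an illustrative assumption: synthetic data stands in
# for a real product's data, and logistic regression stands in for a deep
# learning model, purely to keep the example small and runnable.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# One underlying task, e.g. "recognise the wake word".
X, y = make_classification(n_samples=4000, n_features=20,
                           n_informative=5, random_state=0)

# "Lab" data: clean conditions (speaking directly into the microphone).
X_lab, y_lab = X[:2000], y[:2000]

# "Field" data: the same task under noisy real-world conditions
# (background chatter, bad lighting, partially obscured faces).
X_field = X[2000:] + rng.normal(scale=2.0, size=(2000, 20))
y_field = y[2000:]

# Half the field data is collected from consenting users; half is held
# out to measure how the product actually behaves in the wild.
X_collect, X_test = X_field[:1000], X_field[1000:]
y_collect, y_test = y_field[:1000], y_field[1000:]

# A model trained only on clean lab data, evaluated in the field.
lab_model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
print("lab-only model, field accuracy:",
      accuracy_score(y_test, lab_model.predict(X_test)))

# Retrain with the collected field data folded in, then re-evaluate;
# this is the "collect, retrain, redeploy" loop in miniature.
X_retrain = np.vstack([X_lab, X_collect])
y_retrain = np.concatenate([y_lab, y_collect])
new_model = LogisticRegression(max_iter=1000).fit(X_retrain, y_retrain)
print("retrained model, field accuracy:",
      accuracy_score(y_test, new_model.predict(X_test)))
```

The loop at the end (deploy, collect with consent, retrain, redeploy) is the same pattern the Tesla OTA example describes, just at a vastly larger scale and with far bigger models.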
