Source: Deep Learning on Medium
Alan Turing created the world-famous Turing test in 1950: to pass it, a computer must convince a human that it is a human and not a machine. The first artificial neural network appeared in 1958, while back-propagation arrived around 1986. Since then, Artificial Intelligence (AI) has evolved rapidly, taking on various names along the way: ‘knowledge-based systems’, ‘intelligent systems’, ‘cognitive systems’, ‘analytics’, etc. Although the AI community went through the so-called ‘AI winters’ of the 1980s and 1990s, caused largely by insufficient computing power, dried-up VC funding, and government pessimism about the technology, tech giants and researchers since the beginning of the 21st century have finally begun to realize the immense potential of ‘Machine Learning’ (ML), a subset of AI.
Trends in industries using ML
As far as industrial applications of ML are concerned, healthcare remains one of the biggest industries. According to a report by NVIDIA, an analysis of GPU users suggests that higher education is the largest consumer of GPUs, followed by healthcare, the Internet, advertising & entertainment, financial services, manufacturing & supply chain, e-commerce, security & surveillance, and transportation, among others.
Autonomous Systems — This has been a hot topic of discussion for the past few years. Industries are deploying more and more sensors, and the availability of this sensor data has made IoT capable of addressing many aspects of an industrial system. From robot-driven assembly lines and intelligent predictive-maintenance systems for machines to intelligent agent-based supply-chain decision making, AI can have a large impact when paired with IoT.
Trends in ML Research
In ML research, the domains of Deep Reinforcement Learning (Deep RL), Natural Language Understanding, and dense representation learning, such as Generative Adversarial Networks (GANs) and Capsule Nets, are leading in publications.
GANs are used for generating realistic-looking samples, drawn from a given statistical distribution, that the network has never seen before: converting sketches into images, turning low-resolution images into high-resolution ones, generating art, etc. Reinforcement Learning has recently grown popular after RL-trained agents successfully defeated semi-professional teams in Dota 2. Researchers have published interesting applications of Deep RL, especially after 2018, in areas that involve solving control-optimization problems such as those in supply chains.
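The control-optimization flavor of RL mentioned above can be illustrated in miniature. This is a toy sketch (nothing like the Dota 2 system, which is vastly larger): tabular Q-learning on a hypothetical chain world where an agent must learn to move right to reach a reward. The environment and all names here are illustrative assumptions, not from any system described in this article.

```python
import random

def train_q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a toy chain: actions are left/right, reward at the last state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection: explore sometimes, otherwise exploit
            if rng.random() < eps:
                a = rng.choice([0, 1])
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the best value of the next state
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_learning()
# Greedy policy after training: it should always move right, toward the reward
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(len(q))]
```

The same update rule scales (with function approximation instead of a table) to the kind of supply-chain control problems mentioned above, where states and actions describe inventory levels and ordering decisions.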
CNNs have been the de facto standard for image-recognition tasks. But according to Geoffrey Hinton, CNNs have shortcomings: for example, they can fail to recognize an object when given a rotated or tilted image. He has tried to address these shortcomings with Capsule Nets, and in the coming days we might see a shift from convolutional nets to capsule nets.
Natural Language Understanding — 2018 was essentially a transfer-learning year for NLP, as most of the notable groundwork had already been published by 2017. NLP applications like voice assistants and chatbots have picked up interest in both the business and consumer segments, and integration of chatbots on newer platforms will become more common than before. Amazon also recently shared its vision of integrating ‘Alexa’ with businesses, where it can be made to attend meetings and assist members with minutes and reminders. It would be exciting to see such applications become widespread in the coming years.
As ML reshapes what businesses can offer, specialized tools, frameworks, and infrastructure will be needed to support custom use cases. Some trends in the technologies needed to implement ML for business requirements:
Inference on the Edge
With the current design of hardware, running an AI model on mobile devices is difficult, and training one there is harder still given the available computation. Companies like Qualcomm, Intel, and NVIDIA have come up with AI-enabled chips that can be embedded into mobile devices for inference on the edge. Autonomous systems such as drones and AGVs, and even Android devices, can support AI models with these chips.
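A common technique behind on-device inference is shrinking a model so it fits edge hardware, for instance by quantizing 32-bit float weights down to 8-bit integers. The article does not describe any particular chip's scheme; the following is a generic, minimal sketch of symmetric per-tensor int8 quantization, with all function names being illustrative.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map float weights onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # fall back to 1.0 for all-zero weights
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values and the stored scale."""
    return [v * scale for v in q]

weights = [0.81, -1.27, 0.05, 0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight now needs one byte instead of four, at the cost of a bounded round-off error of at most half the scale, which is why quantized models are a staple of edge deployment.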
Decentralized AI
Most traditional training of machine-learning algorithms is centralized, meaning the models are trained on a remote cloud server. The main drawbacks of this approach are the difficulty of rolling out updates to the corresponding AI software and of simultaneously training on user data as it is generated in real time. Decentralized AI overcomes these drawbacks by crowdsourcing the training to many devices at once; it also greatly reduces inference latency and increases throughput compared to centralized AI. Google's Gboard is one example: using the federated learning approach, it crowdsources ML training to millions of mobile users by making the AI models available directly on their devices.
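The federated approach used by Gboard can be sketched in miniature: each client trains on its own private data, and only the updated weights, never the raw data, are averaged by the server (the FedAvg pattern). This is a toy illustration with a hypothetical 1-D linear model, not Google's implementation.

```python
def local_update(w, data, lr=0.1, epochs=20):
    """One client's local training: gradient descent on MSE for the model y = w * x."""
    for _ in range(epochs):
        # gradient of mean squared error over this client's private data only
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets, rounds=10):
    """FedAvg: each round, clients train locally and the server averages their weights."""
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(local_ws) / len(local_ws)  # only weights cross the network
    return global_w

# Two clients whose private data both follow the same rule, y = 3 * x
clients = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5), (3.0, 9.0)]]
w = federated_average(0.0, clients)
```

The server never sees any `(x, y)` pair, yet the global weight converges to the underlying rule, which is exactly the privacy-plus-learning trade-off that makes the approach attractive for mobile keyboards.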
Decentralized AI will gain greater momentum when ‘Edge Computing’ becomes the norm, in use cases like drones or autonomous vehicles where deep-learning algorithms are constantly involved in real-time decision-making and a network connection to the cloud cannot be relied upon.
ML Platforms
The initial stage of any ML project consists of research, pipeline building, experimentation, and prototyping. Only once these are complete can the model be put into production for actual end users. Given the constantly changing scenarios in the field, an ML platform is of vital importance for any organization: platforms make deploying ML models, for training as well as inference, much easier.
Uber has developed its own internal ML platform, called ‘Michelangelo’, and Netflix has also shared insights about its internal ML platform. Adopting a platform is important because it makes scaling up much easier.
Although the impact of AI is difficult to quantify, some clear indicators point to the ML boom. The number of online courses related to ML has increased significantly. NVIDIA reports selling GPUs to more than 100 times as many companies in 2018 as in 2013. Top tech giants are investing heavily in talent and R&D and in making AI accessible to developers. The number of available open-source tools and frameworks has doubled in a year, and more than $108 billion has been poured into Machine Learning-related startups over the last five years. The coming years look set to be dominated by machine-learning trends with a disruptive impact.
Author: Ashutosh Ikade