Source: Deep Learning on Medium
Challenges of (near-)future AI
Speculating about the future of Artificial Intelligence (AI) is as difficult as speculating about any other type of technological development. For example, logistics (self-driving cars), the financial system (algorithmic trading), and health technology (wearable devices) all have futures that are to some extent uncertain and chaotic.
But we are not totally blind: speculation about the future of AI can be meaningful if done carefully. In particular, if we consider the technological and societal changes that have driven the current AI era, we might be able to say a few things about the shape of things to come.
Classic drivers of the Deep Learning Era
The classic drivers of the Deep Learning (DL) era are often said to be the availability of big data, affordable GPU chips for training, and a host of new DL algorithms invented by both academia and IT firms over the last decade. To these we can add a fourth: our increased ability to annotate raw data with machine-readable human interpretations.
This increased ability, however, is also an increased necessity. Typically, only data that has been carefully annotated, carrying some type of human interpretation in its meta-data, can be used successfully for DL applications. Some examples: Do you want to automatically detect an important blip in your sensor data? You had better collect enough of said blips and save them with meta-data that labels them as such. Oh, and counterexamples too! Or do you want to automatically route streams of incoming documents to the right people and departments in your organisation? Same requirement: you will first need to collect enough correctly classified documents to teach the DL algorithms by example. This is the crux: DL does not learn from your data alone; it needs some type of machine-readable human classification before it can make similar classifications on its own.
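To make the learning-by-example point concrete, here is a minimal sketch of a classifier that can only be trained because humans supplied labels alongside the raw signals. It is a toy nearest-centroid model, not a deep network, and the sensor windows, labels, and numbers are all invented for illustration:

```python
# Toy "learning by example": the model is built from (signal, label) pairs.
# Without the human-supplied labels, there is nothing to learn from.

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: list of (signal, label) pairs -- the human annotations.
    Returns one centroid per label."""
    by_label = {}
    for signal, label in examples:
        by_label.setdefault(label, []).append(signal)
    return {label: centroid(sigs) for label, sigs in by_label.items()}

def classify(model, signal):
    """Assign the label of the nearest centroid (squared distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], signal))

# Labeled sensor windows: "blip" examples AND counterexamples, as required.
training_data = [
    ([0.1, 0.9, 0.1], "blip"),
    ([0.0, 0.8, 0.2], "blip"),
    ([0.1, 0.1, 0.1], "no_blip"),
    ([0.2, 0.0, 0.1], "no_blip"),
]
model = train(training_data)
print(classify(model, [0.0, 0.7, 0.1]))  # a new window resembling a blip
```

The classifier generalises to the new window only because the training windows came with machine-readable labels; feeding it the same signals without labels would leave it nothing to imitate.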
How much data is enough?
The good news about today’s AI is that this is possible at all, even if the raw data is very complex. The bad news is that guarantees are more problematic. How much data is enough? Is there an upper bound on the number of errors made? How robust will the application be if the data starts to drift (which it always does)? All such questions have the same answer, as simple as it is frustrating: there is no math to help you here; you will need to experiment to find out.
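In practice, "experiment to find out" usually means measuring an empirical learning curve: train on increasing amounts of labeled data and check accuracy on a held-out set. A toy sketch of that loop, where both the task (is a point above the diagonal?) and the learner (a k-nearest-neighbour vote) are invented stand-ins, not real DL:

```python
# Empirical learning curve: the only practical answer to "how much data
# is enough?" is to try several training-set sizes and measure.
import random

random.seed(0)

def make_example():
    x, y = random.random(), random.random()
    return ((x, y), y > x)  # label: is the point above the diagonal?

def knn_predict(train_set, point, k=3):
    """Majority vote among the k training points nearest to `point`."""
    def sq_dist(ex):
        (px, py), _ = ex
        return (px - point[0]) ** 2 + (py - point[1]) ** 2
    nearest = sorted(train_set, key=sq_dist)[:k]
    return sum(label for _, label in nearest) * 2 > k

test_set = [make_example() for _ in range(200)]
accuracy = {}
for n in [5, 20, 80, 320]:
    train_set = [make_example() for _ in range(n)]
    hits = sum(knn_predict(train_set, p) == label for p, label in test_set)
    accuracy[n] = hits / len(test_set)
    print(f"n={n:4d}  accuracy={accuracy[n]:.2f}")
```

The curve tells you, for this task and this learner, where accuracy starts to saturate; there is no formula that would have predicted those numbers in advance.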
The necessity of enough labeled data is one of the bottlenecks of today’s DL applications. It is also a meaningful lens through which to view the future. If future AI stays dependent on large amounts of labeled data, and if there is no breakthrough that allows efficient generalisation from one domain to another, then we are looking at an application landscape that can only be conquered step by step, one domain at a time, one task at a time.
This may be easier in some areas than in others. Human beings make driving decisions every day, in vast numbers. Given the right recording procedures, these decisions can become the required meta-data for the raw camera data to learn from. This behaviour, though complex, is well defined. It seems reasonable to assume there will eventually be enough data to learn from to successfully drive cars without human intervention. Steering an organisation, or even a nation, on the other hand, is more complex. The problem lies not only in the complexity of the task, but in the difficulty of collecting sufficient amounts of example data. Indeed, how would one go about collecting human-annotated examples of sound decision making in politics or diplomacy?
Approaching Deep Learning
Every task on the complexity spectrum between steering a car and steering an organisation faces the same data collection challenge. And there is more. Let’s consider image classification as an example. While today’s DL methods have become quite good at predicting which label belongs to a new image, they are severely limited in telling us why. Reasoning about what is in the image is typically not part of the classification mechanism itself. Explainability is an often-quoted conundrum in AI, but it is about more than just computers telling us why they do what they do. Reasoning about causes and effects is essential if we want to see AI take the next step in cognitive development. In particular, causal reasoning is essential if we want AI to reason about intervening in ongoing processes. Should the patient take drug A or B tomorrow? Should the company adopt policy A or B next year? Questions about intervention are fundamentally out of reach of standard DL approaches.
Interestingly, the foundational mathematics to provide DL with causal reasoning abilities already exists. The bottleneck, once again, is us: a particular DL application must be supplied with sound causal assumptions. And there is a similar catch: the data alone is not enough to derive such causal assumptions. Human judgment is needed to provide them, for example in the form of a causal graph that encodes the direction of causality among the observed variables. Human judgment is needed, for now.
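A minimal sketch of what such a human-supplied causal graph buys you. The scenario is invented for illustration: a confounder Z (say, illness severity) influences both a treatment X and an outcome Y, and X also influences Y. The assumed graph Z→X, Z→Y, X→Y licenses the classic backdoor adjustment, P(y | do(x)) = Σ_z P(y | x, z) P(z), an identity that no amount of observational data could justify on its own:

```python
# Human-supplied causal graph turns observational data into an answer
# about intervention. All probabilities below are invented for illustration.
import random

random.seed(1)

def draw():
    z = random.random() < 0.5                      # confounder: severe case?
    x = random.random() < (0.8 if z else 0.2)      # severe cases treated more
    y = random.random() < (0.3 if z else 0.7) + (0.2 if x else 0.0)  # X helps
    return z, x, y

data = [draw() for _ in range(100_000)]            # observational records

def p(pred, given=lambda r: True):
    """Empirical probability of `pred` among rows satisfying `given`."""
    rows = [r for r in data if given(r)]
    return sum(pred(r) for r in rows) / len(rows)

# Naive conditional P(Y=1 | X=1): biased, since treated patients tend
# to be the severe ones.
naive = p(lambda r: r[2], given=lambda r: r[1])

# Backdoor adjustment over the confounder Z, as the assumed graph prescribes.
adjusted = 0.0
for z in (True, False):
    p_y_given_xz = p(lambda r: r[2], given=lambda r: r[1] and r[0] == z)
    adjusted += p_y_given_xz * p(lambda r: r[0] == z)

print(f"P(Y=1 | X=1)     = {naive:.2f}")     # observational, ~0.58
print(f"P(Y=1 | do(X=1)) = {adjusted:.2f}")  # interventional, ~0.70
```

The same observational data yields two different answers, and only the interventional one is relevant to "should the patient take the drug?". Nothing in the data says which adjustment, if any, is correct; that knowledge sits in the graph a human supplied.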
Challenges of (near-)future AI
These, then, are two big challenges of (near-)future AI: collecting annotated data sets, and providing AI with sound causal assumptions about the data and the world. Now, let’s assume we overcome these challenges: where will we end up? We now enter pure speculation, but let’s fantasize for a bit.
AI in wearable medical devices, smartphones, or even clothing itself could drop the price of medical research to the extent that developing a new treatment or medicine no longer costs billions. By the same principle of data gathering and sharing, retail and tourism could be transformed into much more personalized business areas, offering you fashion and travel destinations that match your tastes better, with less effort on your part. Traffic congestion could be optimised away by coordinating entire fleets of vehicles to move as one. Mobility could be transformed, not only by self-driving cars steering you out of harm’s way, but by better planning, making sure you are where you really should be to achieve more in life with less energy spent.
But the true potential lies in the discovery of new scientific knowledge, new materials, new treatments, new policies, new ways of transforming our lives. Science has long been a group effort. New discoveries are not made by a single genius, but by prolonged team efforts over the span of years, sometimes decades. Since the conception of AI, we have had a tendency to view intelligence as the intelligence of the individual, and hence AI as the intelligence of an individual robot or agent. Perhaps in future AI design we should cease to focus on the intelligence of the individual and recognize the true potential of automating scientific creativity. We can still rebrand AI as Artificial Science, if we want.