Artificial General Intelligence is a long way away

The holy grail of machine learning and AI research is “Artificial General Intelligence” (AGI): a machine with the capacity to learn or understand any task that a human can. In the last few years, advances in AI have prompted several apocalyptic warnings, including from the likes of Elon Musk, who has described AI as a “fundamental existential risk for human civilisation”.

I recently read a fantastic book called “Radical Uncertainty: Decision-Making for an Unknowable Future” by Mervyn King and John Kay. The book is a brilliant discussion of two contrasting approaches to decision-making in an uncertain world, one relying on human judgement and the other leaning heavily on mathematical modelling. The authors make a powerful case for the former, arguing that over-reliance on the latter leads to poor outcomes.

By focusing on the art of decision-making, and drawing on many ideas from this book, I hope to convince you that AGI is a long way away.

Different sorts of reasoning

If AGI is to become a reality, AI will at least have to form an appreciation of how humans think about the world. Humans can reason in the following ways:

  1. Deductive Reasoning — such as “I live in London. London is in the United Kingdom. Therefore I live in the United Kingdom”.
  2. Inductive Reasoning — such as “Analysis of past election results indicates that voters favour the incumbent party in favourable economic circumstances. In the run-up to the 2016 US presidential election, economic conditions were neither favourable nor unfavourable. Therefore the election will be close”. This reasoning uses events that have happened in the past to infer likely future outcomes.
  3. Abductive Reasoning — such as “Donald Trump won the 2016 presidential election because of concerns in particular swing states over economic conditions and identity, and because his opponent was widely disliked”. This reasoning seeks the best explanation for a unique event. Humans excel at it because we are adept at filtering disparate evidence down to the explanation that fits best.

AI can reason in the following ways:

  1. Deductive Reasoning — traditional software is very good at this. Indeed, deductive reasoning is the basis of all computer code. For example, if ‘a’ equals ‘b’ and ‘b’ equals ‘c’, then logically ‘a’ equals ‘c’.
  2. Inductive Reasoning — machine learning reasons this way, using past data to make inferences about the future (see the sketch after this list). Perhaps it is no coincidence that Andrej Karpathy has called this ‘Software 2.0’.
  3. Abductive Reasoning — if computers could reason in this way, that would surely be true AGI (Software 3.0). Computers are very bad at this because it is often not obvious which data to use, and the data that is available may well be incomplete.
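
To make the distinction between the first two concrete, here is a minimal Python sketch, with an invented rule and made-up numbers used purely for illustration: the deductive part applies an explicit rule that always holds, while the inductive part fits a simple trend to past examples and extrapolates it to a new case.

```python
# A minimal sketch, not a real model: the rule and the data below are
# invented purely to illustrate the two kinds of reasoning.

# 1. Deductive reasoning: the conclusion follows from an explicit rule.
def lives_in_uk(city: str) -> bool:
    uk_cities = {"London", "Manchester", "Edinburgh"}  # the hard-coded "rule"
    return city in uk_cities

print(lives_in_uk("London"))  # True, by definition, every time

# 2. Inductive reasoning: a pattern is inferred from past examples and
# then extrapolated to an unseen case.
past_elections = [
    # (economic_conditions_score, incumbent_vote_share) -- made-up data
    (0.9, 0.56), (0.7, 0.53), (0.1, 0.44), (0.3, 0.47),
]

# Fit a one-variable linear trend by least squares.
n = len(past_elections)
mean_x = sum(x for x, _ in past_elections) / n
mean_y = sum(y for _, y in past_elections) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in past_elections)
         / sum((x - mean_x) ** 2 for x, _ in past_elections))
intercept = mean_y - slope * mean_x

# Predicted vote share in a new election with middling conditions (0.5):
# trustworthy only if the future resembles the past.
print(round(intercept + slope * 0.5, 3))
```

The inductive prediction is only as trustworthy as the assumption that the new case resembles the old ones, which is exactly where abductive reasoning has to take over.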

So when do we use each of these types of reasoning? Deductive reasoning is typically used for very well-defined problems, e.g. mathematical puzzles. However, the less well defined the problem, the more inductive and abductive reasoning are required. ‘Why did Trump win the 2016 presidential election?’ is not a question that can be meaningfully answered using deductive reasoning.

Small worlds vs big worlds

The authors use the term ‘small world’ for problems that can be completely defined. Take, for example, the game of chess. Nothing can happen outside the carefully defined rules of the game. Your opponent cannot suddenly start using his bishop as a knight, taking you by surprise and claiming victory.

These ‘small world’ problems are ideally suited to being solved by AI, as they yield to deductive or inductive reasoning. The AI can look over millions of examples of previous games, safe in the knowledge that the same basic rules have applied to each game (i.e. the problem is ‘stationary’). As a result, AI researchers have used games as an ideal testing ground for their algorithms.

Over the last decade, these algorithms have become hugely more sophisticated. In 2016, an algorithm developed by Google DeepMind, AlphaGo, stunned the world by beating the world champion Lee Sedol at the ancient Chinese game of Go. Go is considered the hardest of all board games for algorithms to play, given the large number of possible moves available on any turn (known as the ‘branching factor’); the rough calculation below gives a sense of the scale. Following the Go victory, research has focused on ever more complex environments. StarCraft II, one of the most popular strategy video games, was picked as the next challenge by DeepMind because “players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales.” In 2019, they announced that their algorithm, AlphaStar, had achieved Grandmaster level in the game.
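
To see why the branching factor matters, here is a back-of-the-envelope calculation. The figures of roughly 35 legal moves per turn in chess and roughly 250 in Go are commonly cited averages, not exact counts.

```python
# Back-of-the-envelope game-tree sizes, assuming a constant branching
# factor at every turn (~35 for chess, ~250 for Go are commonly cited
# averages; real games vary move by move).
def tree_size(branching_factor: int, depth: int) -> float:
    """Approximate number of distinct move sequences of the given depth."""
    return float(branching_factor ** depth)

depth = 10  # look just ten moves ahead
print(f"chess, {depth} moves ahead: ~{tree_size(35, depth):.1e} sequences")
print(f"go,    {depth} moves ahead: ~{tree_size(250, depth):.1e} sequences")
# Go's tree is roughly (250/35)**10 times larger (hundreds of millions of
# times), which is why brute-force search alone was never going to crack it.
```

Even this understates the gap, since Go games also last considerably more moves than chess games.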

However, just because an algorithm performs well on a difficult task, does not mean that we would describe it as ‘intelligent’. The researchers at DeepMind note that they see an intelligent algorithm as one which can perform “sufficiently well on a sufficiently wide range of tasks”. To that end, in early 2020, they came up with Agent57, an algorithm which can perform at human level and above on all of the 57 classic Atari games.

This is all incredibly impressive; very few people predicted this level of progress even five years ago (indeed, in 2014, Wired magazine stated that Go was so complex that an algorithm would never beat an expert human). But does this mean that we are close to AGI? Unfortunately not. All these environments are still ‘small worlds’: they have a defined set of rules which do not change over time. Moreover, these algorithms employ a trial-and-error approach, replaying the game literally millions of times, a luxury not afforded in the real world.

Take, for example, a ‘big world’ problem. In 2011, President Obama was faced with a decision: should he authorise a raid on the Abbottabad compound in Pakistan, where Osama bin Laden was believed to be hiding? Some of his advisers were fairly sure that bin Laden was in the compound, others less so. This was certainly a unique decision, one with limited historical precedent. There were moral and ethical considerations. And that is before any number of risks were considered: what if the equipment for the raid failed? What if the Pakistani military pre-empted the raid and fought back? The ‘bigger’ the world, the more inductive and, in particular, abductive reasoning are needed.

AGI is currently a long way off

Even with the recent advances in AI, it is hard to see how an algorithm today could even start to think about a decision such as whether to authorise an attack on the Abbottabad compound. Why is this?

While there were certain historical precedents (in 1980, President Carter ordered an attempt to rescue the US hostages held at the embassy in Tehran, a mission that ended in fiasco), this was a unique event and a unique decision. So it required abductive reasoning: there was no database of very similar situations, failed or successful, from which an algorithm could learn. This highlights the first hurdle on the road to AGI: data efficiency. All the AI examples above have required millions of examples for the algorithm to iterate towards an optimal state. But many decisions in life are the opposite, made on the basis of limited historical data.

Even if there were a database of similar historical incidents, to what extent would it actually help? Very little would be my guess. In games such as chess and Go (small worlds), the number of possible actions is limited and the consequences of each action are defined by the rules of the game. In the real world (big worlds), even a small action or decision can have huge and unpredictable consequences. This is the second hurdle on the road to AGI: non-stationarity. A key assumption in the AI use cases above is that past experience is a good guide to the future. But in the real world, this is not necessarily the case, as the toy sketch below illustrates.
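
As a toy illustration, with entirely made-up numbers, here is what non-stationarity does to a model that learns only from the past: the fitted relationship is fine as long as the old regime holds, and badly wrong the moment it does not.

```python
# A toy illustration of non-stationarity with entirely synthetic numbers:
# a model fitted to the past is only as good as the assumption that the
# future behaves like the past.

# "History": while the old regime held, outcome y was roughly 2 * x.
history = [(float(x), 2.0 * x) for x in range(1, 6)]

# Fit the single coefficient by least squares (no intercept, for brevity).
coef = sum(x * y for x, y in history) / sum(x * x for x, _ in history)
print(f"learned coefficient: {coef:.2f}")  # ~2.00

# Then the world changes: the same action now has the opposite effect.
def new_regime(x: float) -> float:
    return -2.0 * x

x_new = 4.0
print(f"model predicts {coef * x_new:+.1f}, reality delivers {new_regime(x_new):+.1f}")
```

Chess engines never face this problem, because the rules that generated their training games are the same rules they will play under tomorrow.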

Lastly, current AI algorithms are built around ‘optimising’ the desired criteria. But did Obama make the ‘optimal’ decision? I would argue that this question isn’t meaningful. Sure, the raid succeeded, so in hindsight we might say that it was a good decision. But what if the risks had been much higher than appreciated at the time and Obama had simply been incredibly lucky? In most ‘big world’ problems, we ‘satisfice’ rather than optimise, because we don’t have complete information about the problem (the sketch below contrasts the two). Developing algorithms that ‘satisfice’ in situations with incomplete data will require a paradigm shift in how such algorithms are developed.
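
For concreteness, here is a minimal sketch of the difference, with invented option scores standing in for the messy, expensive judgements a real decision-maker would face.

```python
# A toy contrast between optimising and satisficing. The option scores
# are random stand-ins for the assessments a real decision would need.
import random

random.seed(0)
options = list(range(1000))
scores = {option: random.random() for option in options}  # hidden "quality"

def evaluate(option: int) -> float:
    """Stand-in for an expensive, imperfect assessment of one option."""
    return scores[option]

# Optimising: inspect everything and pick the argmax. Only feasible when
# the option set is fully known and every option can be evaluated.
best = max(options, key=evaluate)

# Satisficing: inspect options until one clears a "good enough" threshold,
# then stop. No claim is made that it is the best available.
def satisfice(candidates, threshold=0.9):
    for inspected, option in enumerate(candidates, start=1):
        if evaluate(option) >= threshold:
            return option, inspected
    return None, len(candidates)

chosen, inspected = satisfice(options)
print(f"optimiser evaluated {len(options)} options to find option {best}; "
      f"satisficer stopped after {inspected}, settling for option {chosen}")
```

The satisficer gives up any guarantee of finding the best option; in exchange it can stop early and act, which is usually what real decision-making demands.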

This doesn’t mean that developments in AI won’t have a large effect

If you are worried about developments in AI, the above might be reassuring. But this doesn’t mean that AI will not have a transformative effect on society. AI will continue to improve on bigger and bigger ‘small worlds’.

Take, for example, PolyAI, a company which provides state-of-the-art AI-powered customer service by voice. This is very advanced AI, but it still tackles a ‘small world’ problem. Previous customer service interactions are very useful for training the algorithm, and the structure of customer service queries does not change hugely over time. It is certainly a bigger world than, say, chess: the rules are less well defined and the problem is less structured. The algorithm still has to hand over to a human for particularly difficult cases.

However, when this technology is widely adopted, it could have a huge economic impact. Some 4% of the workforce in the United Kingdom is currently employed in call centres, jobs which could come increasingly under threat. This is one of a number of industries that could change hugely with advances in AI. Furthermore, it is more than possible that AI algorithms will be trained in ‘small worlds’ but used erroneously in ‘big worlds’, with detrimental effects (see my previous blog post for more details). My prediction is that AI will be deployed in ever larger ‘small worlds’ and, over time, have a large economic and societal effect.

So in conclusion…

So should we be worrying about AI posing a fundamental risk to human civilisation? I would argue not. Developing AI that is truly intelligent will require a wholly different paradigm. However, this does not mean that AI will not have a profound effect on how we live. We should start planning for that sooner rather than later.