Relax, AI Will Not Take over the World


At least not anytime soon.

Photo by Franck V. on Unsplash

‘Real’ artificial intelligence is not here yet. AI systems lack the innate knowledge needed to understand the world. You can stop glancing sideways at your devices.

AI has been feared ever since the technology emerged. Popular media, books, and films offer countless stories of artificial intelligence taking over the world, and the possibility comes up constantly in everyday conversation. But if something like that were ever to happen, it would happen far, far in the future.

Are the potential dangers of artificially intelligent beings being over-hyped again? A pattern is noticeable. Much like a stock, AI’s popularity rises until it is overvalued, only to be followed by a downturn shortly after. Some call the downturn an AI winter.

But machine learning and AI guru Andrew Ng phrased it well when he said:

“Fearing a rise of killer robots is like worrying about overpopulation on Mars”.

What caused the spike in AI’s popularity this time?

AI achieved its first real commercial success. The spike no doubt has something to do with the value deep learning has started providing to companies. The theory behind deep learning is not new; it has been known and tried since the mid-1980s, but back then computers lacked the processing power. Many believe we only need to keep increasing computing power and real AI (artificial general intelligence) will emerge: machines that “have the capacity to understand or learn any intellectual task that a human being can.” But is this really the case? We are not even close to ‘real’ AI. We have known for a long time that a computer is faster than a human at specific calculations, such as computing the square root of a number. But so what? Having self-driving cars does not mean the cars are intelligent in any way.

What is missing from our current understanding of AI?

Gary Marcus puts it beautifully in Lex Fridman’s AI Podcast episode Toward a Hybrid of Deep Learning and Symbolic AI: we not only need deep learning, we also need “deep understanding.” We would need to build a computer with an innate capacity to understand the world, the way a baby’s brain has the innate capacity to build an understanding of the world from the very start of its life. Without that innate capacity, the child could not learn; and without the ability to learn, the innate understanding would be worthless. It’s the age-old debate of nature vs. nurture, and of course we need both: nature AND nurture.

Unfortunately (or fortunately, depending on how you view it), the ‘understanding’ side of the field is a bit lacking. By a bit, I mean there has been no progress at all. Outside of popular media, no existing AI system has anything close to this level of understanding. All of them simply mimic intelligence. That won’t work for building artificial general intelligence, according to Gary Marcus. And I think he’s right.

What other avenues of AI can we explore?

Tesla’s self-driving effort is one interesting test case. Will Tesla be able to pull off self-driving using deep learning alone, or will it have to program “innate knowledge” about the world into its cars in order to succeed? It may have to.

Another interesting possibility would be to use AI to create simulated evolution. We could create life-forms that live only inside a computer. Let them compete for resources and reproduce in a simulated environment, as we do in the real one. Add a random mutation when they reproduce. Wait a million, or a billion, cycles and see the outcome. Given enough time, intelligence should eventually emerge. Right? A toy version of this loop is sketched below.
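To make the idea concrete, here is a minimal Python sketch of such a simulated-evolution loop. Everything in it is an illustrative assumption, not a real model of life: the “life-forms” are bit strings, “competing for resources” is reduced to a fixed fitness target, and the cycle count is scaled down so it runs in seconds.

```python
import random

# A toy "life-form" is a bit string; its fitness is how many bits match
# a hypothetical target pattern (a stand-in for "competing for resources").
TARGET = [1] * 20
POP_SIZE = 100
MUTATION_RATE = 0.01

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def reproduce(parent_a, parent_b):
    # Crossover: each gene comes from one parent, then flip bits at random.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [1 - g if random.random() < MUTATION_RATE else g for g in child]

population = [[random.randint(0, 1) for _ in range(len(TARGET))]
              for _ in range(POP_SIZE)]

for cycle in range(1000):  # "wait a million cycles," scaled down for a demo
    # Competition for resources: only the fitter half gets to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    offspring = [reproduce(random.choice(survivors), random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness:", fitness(population[0]), "of", len(TARGET))
```

A toy like this reliably climbs toward whatever fitness function you hand it. The open question the paragraph above hints at is that nobody knows how to write a fitness function, or a rich enough environment, whose optimum is general intelligence.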

We still have many avenues to explore before we come close to having real intelligent machines.

Many believe we may be close, or at least somewhat close, to the technological singularity: a point at which AI progress turns exponential, machines get ever smarter, and eventually human existence won’t matter any more. Where is the evidence that we are close to that?

According to Francois Chollet in Lex Fridman’s AI Podcast episode Keras, Deep Learning, and the Progress of AI, the opposite is true. The more we accomplish, the exponentially harder it becomes to advance further, and the more people it takes. Although the number of people working in the field and publishing research papers is growing exponentially, progress is linear at best, if not stagnant.

An intelligent machine cannot exist on its own. Our brain does not exist on its own either: it exists in connection with the rest of the body, and its intelligence is realized through interaction with its environment. We need to look at the whole ecosystem. We have lots of externalized knowledge: the Internet, libraries, and schools are all part of our intelligent ecosystem. The days when one person could make breakthrough discoveries alone are gone. A group of interconnected individuals working together is how we make meaningful progress.

So relax. AI is far from reaching its true power. The human brain is still needed in the web of interconnected space and time. Your single brain won’t make any breakthrough discoveries by itself anytime soon, but within the web it is a powerful addition. An important one. And it’s not going anywhere anytime soon, least of all because of AI.