The Unfortunate Power of Deep Learning

Original article was published by Hein de Haan on Artificial Intelligence on Medium


As our systems become more and more capable, we need to be able to know they will make the right decisions

Nope, I’m not posting an image of the Terminator. Photo by Markus Spiske on Unsplash

In May 1997, Deep Blue, a Chess computer developed by IBM as the next stage of Carnegie Mellon University’s Deep Thought project, defeated then-reigning World Chess champion Garry Kasparov with a score of 3.5–2.5. Deep Blue used a classic, explicitly programmed search procedure (minimax) to play Chess: it scored each candidate move by assuming the opponent would make the best possible counter-move.
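To make the idea concrete, here is a minimal sketch of minimax over a toy game tree. This is an illustration of the general procedure only, not Deep Blue’s actual engine, which added alpha–beta pruning and a hand-tuned Chess evaluation function; the tree shape and leaf values here are made up.

```python
# Minimax over a toy game tree. A position is either a number
# (a leaf's static evaluation, from the maximizer's point of view)
# or a list of child positions reachable in one move.

def minimax(position, maximizing):
    if isinstance(position, (int, float)):
        return position  # leaf: return the static evaluation
    scores = [minimax(child, not maximizing) for child in position]
    # Each move is scored by the opponent's best reply:
    # the maximizer picks the highest score, the minimizer the lowest.
    return max(scores) if maximizing else min(scores)

# Toy tree: the maximizer chooses between two moves; for each,
# the minimizing opponent then picks the reply worst for us.
tree = [[3, 5], [2, 9]]
best = minimax(tree, maximizing=True)
# The opponent would answer the first move with 3 and the second
# with 2, so the first move (value 3) is best.
```

This "score every move by the best counter-move" recursion is exactly why Deep Blue’s decisions were traceable: every number it produced came from an explicit, inspectable calculation.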

Twenty years later, Google DeepMind’s AlphaGo Master, a Deep Learning system, beat Ke Jie, then the number-one Go player in the world, in a three-game Go match. The world was shocked, yet AlphaGo Master would get successors that played (a lot) better, the latest being MuZero. Interestingly, the systems built by DeepMind use Deep Learning, a technique that enables a system to learn from experience. Using this principle, MuZero reached extreme superhuman levels of Go performance by playing against itself. What’s more, it can learn Chess and Shogi, too.


I picked these two examples of historic AI moments because they illustrate a larger trend: the Deep Learning Revolution, in which Deep Learning has found its way into more and more products over the years. The results are often amazing: just think of driverless cars, which in the future might drive so well that accident rates drop dramatically.

We don’t know exactly what’s going on inside a Deep Learning system.

So why the negative title? I’m concerned because humanity is building systems that are more and more intelligent, often using Deep Learning, and we simply don’t know exactly what’s going on inside these systems. They are mostly a black box, and even their designers can’t tell why a system decided something. Deep Blue, by contrast, was a completely interpretable system: its thought process was explicitly designed by its programmers, so the programmers could always, in theory, know which board position led to which move played by Deep Blue. MuZero, however, is more of an intuitive system, one that doesn’t use explicitly defined rules to reach a decision. A Deep Learning system that can recognize faces might be extremely good at recognizing your face, but you couldn’t tell exactly what made the system decide it is indeed seeing your face.

MuZero is an intuitive system.

There are techniques that make Deep Learning more interpretable, like Saliency Maps. In the case of face recognition, a Saliency Map shows the engineer which parts of the image the Deep Learning algorithm is paying attention to; how the algorithm uses that information, however, remains unclear.
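A saliency map, in its simplest form, measures how sensitive the model’s output is to each input pixel. Here is a hedged sketch using a toy stand-in for a model and a finite-difference gradient estimate; real tools compute the gradient by backpropagation through the actual network, and `toy_model` below is an invented example, not any particular face recognizer.

```python
import numpy as np

def saliency_map(model, image, eps=1e-4):
    """Estimate |d(score)/d(pixel)| for every pixel by nudging each
    pixel slightly and observing how much the model's score changes."""
    base = model(image)
    sal = np.zeros_like(image)
    for idx in np.ndindex(image.shape):
        bumped = image.copy()
        bumped[idx] += eps
        sal[idx] = abs(model(bumped) - base) / eps
    return sal

# Toy "model": its score depends only on the top-left 2x2 patch,
# standing in for a network that attends to one region of a face.
def toy_model(img):
    return float(img[:2, :2].sum())

img = np.random.rand(4, 4)
sal = saliency_map(toy_model, img)
# sal is bright exactly where the model is sensitive (the 2x2 patch)
# and zero elsewhere.
```

Note what the map does and doesn’t tell you: it localizes *where* the model looks, but it says nothing about *how* the pixels in that region combine into a decision, which is exactly the residual opacity described above.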


Better algorithms for this might be invented in the future, but one thing will remain true: making Deep Learning systems more interpretable requires extra effort after the model is built, which gives rise to at least two problems. First, the engineers of the system have to want, and be able, to put in that extra effort. Second, the algorithm built to make the black box less “black” can itself fail, so it would be much better if the original system were built to be explainable from the start.

Humanity’s current focus on building opaque AI systems is worrisome.

Artificial Superintelligence and Friendly AI

To understand why interpretability in AI worries me, let’s briefly discuss two theoretical concepts: Artificial Superintelligence (ASI) and Friendly AI. An Artificial Superintelligence is, as the name suggests, an Artificial Intelligence that is (much) more intelligent than even Leonardo da Vinci, Albert Einstein, or whoever your favorite genius is. If you consider that humanity dominates Earth because its intelligence is superior to that of all other animals, then the introduction of a new “species” that’s far more intelligent than we are should alarm you. This introduction of ASI might very well happen this century, and its impact will, because of the ASI’s superior intelligence, be huge. That impact could easily be negative (as negative as human extinction), but it could also be very positive (e.g. rapidly curing diseases).

Unless we somehow find a way to make Deep Learning systems completely transparent, our current Deep Learning trend doesn’t give much hope for a provably Friendly AI.

An ASI that has a positive impact on humanity is called a Friendly AI, a term coined by Eliezer Yudkowsky. Since an ASI probably cannot be controlled by humans — because of its superior intelligence — and since its impact will be so big, the designers of an ASI need to know up front whether the ASI will be friendly. They have to be able to demonstrate, on paper, that the system will be friendly. Unless we somehow find a way to make every Deep Learning system completely interpretable, our current Deep Learning trend doesn’t give much hope for a provably Friendly AI. I’m not claiming Deep Learning will be part of the first ASI (although it might very well be); it’s the current hype around building Deep Learning systems instead of (completely) interpretable systems that worries me.
