Achieving Fake Explanations in AI


Photo by rawpixel on Unsplash

Cassie Kozyrkov has just written a good take on why “Explainable AI won’t deliver”. It is the best survey I have seen of the ideas behind the unlikelihood of delivering explainable AI. At the beginning of this year (2018), one of my predictions for Deep Learning was the following:

Explainability is unachievable — we will just have to fake it

I wrote that this was an unsolvable problem and that, instead, machines would become very good at “faking explanations.” The objective of these explainable machines is to understand the kinds of explanations a human will be comfortable with or can understand at an intuitive level. However, a complete, true explanation will be inaccessible to humans in the majority of cases.

Thus AI explainability is a human-computer interaction (HCI) problem. I am now thrilled that Kozyrkov has written something that explains these ideas in much greater detail. The primary motivation for the need for explainability revolves around the question of trust: can we trust the decision of a cognitive machine if it cannot explain how it arrived at that decision? This was in fact the heart of a discussion I had previously on Human-compatible AI.

Kozyrkov recommends explaining complex behavior through the use of examples. It is the AI’s responsibility to provide examples that explain its behavior. This recommendation is in fact an instance of what I call an intuitive-level explanation. We must understand what it means to have an intuitive explanation and then design our AI to deliver such explanations.
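To make explanation-by-example concrete, here is a minimal sketch of one way to do it: return the training cases most similar to the input alongside the prediction. The dataset, classifier, and nearest-neighbor index below are my own illustrative choices (scikit-learn’s iris data, a random forest, and NearestNeighbors), not anything prescribed in Kozyrkov’s piece.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

# Toy setup: any classifier and any training set would do here.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
index = NearestNeighbors(n_neighbors=3).fit(X)

def explain_by_example(x):
    """Return the model's prediction plus the most similar training cases."""
    pred = model.predict([x])[0]
    _, neighbor_ids = index.kneighbors([x])
    neighbors = [(X[i].tolist(), int(y[i])) for i in neighbor_ids[0]]
    return pred, neighbors

prediction, similar_cases = explain_by_example(X[0])
print("Predicted class:", prediction)
for features, label in similar_cases:
    print("  similar training case:", features, "-> class", label)
```

The “explanation” here is not a trace of the model’s internal reasoning; it is a handful of familiar cases a human can inspect and compare against, which is exactly the intuitive-level kind of explanation discussed above.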

How do we trust complex machinery like airplanes? The people who study aerodynamics and the wings that give planes lift will tell us that the mathematics is intractable and that the shapes of the foils were discovered by happenstance. Yet we all comfortably get on planes every day, confident that we’ll make it to our destination alive. We trust this machinery even though the physics that allows it to fly isn’t as tractable as we are led to believe. The reason is that our proxy for trust is the rigorous testing performed on these planes over the decades and their track record of reliability.

With AI, we have the problem of Goodhart’s law. Goodhart’s law implies that any proxy measure of performance will be gamed by intelligent actors. What AI needs are more intelligent tests that ensure that what we expect to be learned is actually learned. This new methodology of testing will be absolutely essential if we are going to deploy AI in tasks demanding human safety.
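As a rough illustration of what such a test might look like, here is a minimal behavioral check, assuming a scikit-learn-style model with a `predict()` method. The function name, the perturbation, and the agreement threshold are my own assumptions for the sketch, not a methodology from the post.

```python
import numpy as np

def test_prediction_stability(model, X, noise_scale=0.01, min_agreement=0.95):
    """Assert that tiny input perturbations rarely flip the model's decisions."""
    rng = np.random.default_rng(0)
    X_perturbed = X + rng.normal(scale=noise_scale, size=X.shape)
    baseline = model.predict(X)
    perturbed = model.predict(X_perturbed)
    agreement = np.mean(baseline == perturbed)
    assert agreement >= min_agreement, (
        f"only {agreement:.1%} of predictions survived a small perturbation"
    )
```

A single check like this proves little on its own; the point is that trust is built from suites of such behavioral tests rather than from introspecting the model’s internals.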

Then there’s the question of whether we select a solution that we understand well but doesn’t perform, or the alternative, one that performs well but we don’t understand. It is likely that a majority of people will select the latter. A majority of people do not understand the details of how their mobile phone even works. In fact, most people can’t even differentiate between a WiFi signal and a cellular signal. Even worse, people assume that a wireless signal is a natural part of the environment.

There are financial derivatives out there that have been purposely created by quants skilled in the most obscure and complex mathematics. Yet financial firms have no issue selling these products to their customers. The only thing that apparently matters is the track record and expected future performance of these obscure financial products. The firms are thus placing their trust in the skills of their quants. Despite this lack of understanding, financial derivatives are a very lucrative business.

In summary, explainability is a feature that is demanded as a requirement for trust. Establishing trust is of course a complex subject and has many aspects rooted in human psychology. In general, though, trust in machinery is a consequence of reliability that comes about through exhaustive testing. Ultimately, people will trust products that are known to deliver over ones that they understand. It’s only human nature.

This final conclusion does not mean that explainable AI is unnecessary or that we can’t create better explainable systems. The two areas I mentioned above, how to create intuitive explanations and how to create AI testing (and also curricula), are specific areas that require greater research. Where I see the mistake being made is in the assumption that DARPA’s third wave “Contextual Adaptation” AI, or what I would call “Intuitive Causal Reasoning,” leads automagically to explainable AI. The limitation is not in the machines; the limitation is in our own ability to understand complex subjects.