AI Transparency will Lead to New Approaches

Historically, the issue of transparency comes down to a single problem: once an AI system is working, developers don’t know what is going on inside. The AI essentially becomes a black box with inputs and outputs. Developers can see that outputs usually correspond correctly to inputs, but they don’t know why, and when an erroneous output occurs, they can’t tell why that happens either. All of this makes such a system very difficult to improve.

Enter new programs that can analyze the workings of an AI to explain what is going on. In their well-known 2016 paper “Why Should I Trust You?”: Explaining the Predictions of Any Classifier, Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin present some surprising results. When an AI misidentified a husky dog as a wolf, they probed the model and learned that the reason for the mis-categorization was the snow in the image. Many of the wolf pictures in the training set also contained snow, so the authors concluded that the AI had developed a bias: pictures with snow were likely to show wolves, not dogs.

Ordinarily, you might think that directly probing the internals of the AI would be the way to resolve this issue, but that approach remains elusive. Instead, a program such as the open-source LIME used in this example makes small changes to the input image to see which changes alter the output. Ultimately, changing areas of snow to something else changed the result, while changing areas of the dog didn’t, which leads to the conclusion that the AI’s decision must have been based on the snow. (A simplified sketch of this perturbation idea appears at the end of this piece.)

With a program like LIME, many AI professionals are learning that AIs often make decisions for reasons we didn’t expect; regrettably, they are not making decisions for reasons of intelligence. Given this state of affairs, AI professionals are recognizing that today’s AI looks more and more like sophisticated statistical analysis, and less and less like actual intelligence. To get to the next step on the road to genuine intelligence, consider Moravec’s paradox, which can be paraphrased as: the simpler an intelligence problem seems, the more difficult it is to implement. And saying something is “so simple any three-year-old can do it” is the absolute kiss of death.
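To make the perturbation idea concrete, here is a minimal sketch in Python of an occlusion-style probe. It is not LIME itself (LIME perturbs random groups of superpixels and fits a local linear surrogate model to the results), but it shows the same underlying logic: hide parts of the input and see which parts move the prediction. The names `occlusion_importance` and `predict_fn`, and the dummy classifier in the demo, are illustrative assumptions, not part of the original article or of LIME’s API.

```python
import numpy as np

def occlusion_importance(image, predict_fn, patch=8, fill=0.5):
    """Score each patch of `image` by how much occluding it lowers the
    model's confidence in its original prediction.

    image:      H x W x C array with values in [0, 1]
    predict_fn: maps a batch of images to class probabilities
    patch:      side length of the square regions to occlude
    fill:       value used to paint over an occluded patch
    """
    base = predict_fn(image[None])[0]   # probabilities for the untouched image
    cls = int(np.argmax(base))          # class the model originally chose
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    importance = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            perturbed = image.copy()
            perturbed[i * patch:(i + 1) * patch,
                      j * patch:(j + 1) * patch] = fill
            prob = predict_fn(perturbed[None])[0][cls]
            # A large confidence drop means this patch drove the decision.
            importance[i, j] = base[cls] - prob
    return importance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3))

    def dummy_predict(batch):
        # Hypothetical stand-in for a real classifier: its "wolf" score
        # depends only on the top half of the image (the "snow").
        snow = batch[:, :16].mean(axis=(1, 2, 3))
        return np.stack([1.0 - snow, snow], axis=1)   # [dog, wolf]

    heat = occlusion_importance(img, dummy_predict)
    print(np.round(heat, 3))   # only top-half patches get nonzero scores
```

In this toy setup, occluding patches in the bottom half of the image (the “dog”) leaves the prediction untouched, while occluding the top half (the “snow”) shifts it, which is exactly the kind of evidence that led the paper’s authors to their snow-bias conclusion.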
