The Epistemological License of Machine Learning Models

A common refrain from ethicists and critics of deep learning concerns the unavailability of knowledge about algorithmic decision processes. The ‘hidden’ neural layers within a deep-learning-enabled AI effect a computational black box. Recent efforts by computer scientists have targeted opening that black box to inspection and interrogation. To interrogate such an AI would be to learn how it reached the decisions it did, information that matters to the technicians diagnosing and troubleshooting algorithms responsible for predicting medical conditions and guiding self-driving vehicles.

Automated Concept-based Explanation

A collaboration between Google and Stanford has produced a machine learning explanation method dubbed “automated concept-based explanation,” or ACE. The goal of ACE is to provide users with human-meaningful visual concepts that show how an image classifier arrives at its predictions, for example, that an image of a cat probably does depict a cat.

“ACE identifies higher-level concepts by taking a trained classifier and a set of images within a class as input before extracting the concepts and sussing out each’s importance. Specifically, ACE segments images with multiple resolutions to capture several levels of texture, object parts, and objects before grouping similar segments as examples of the same concept and returning the most important concepts.”
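What follows is a minimal Python sketch of the pipeline the quoted passage describes, not Google’s implementation. It assumes scikit-image’s SLIC superpixels for the multi-resolution segmentation, a placeholder embed_segment function standing in for activations from the trained classifier’s intermediate layer, and k-means for grouping similar segments into concepts; the importance-scoring step is omitted.

```python
# Sketch of an ACE-style concept-extraction pipeline (illustrative only).
# `embed_segment` is a hypothetical placeholder for the real embedding step.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def embed_segment(segment_pixels):
    # Placeholder: mean colour of the segment. A real pipeline would resize
    # the segment and pass it through the classifier's intermediate layer.
    return segment_pixels.mean(axis=0)

def extract_concepts(images, resolutions=(15, 50, 80), n_concepts=5):
    """Segment each image at several resolutions, embed every segment,
    and cluster similar segments into candidate visual 'concepts'."""
    embeddings = []
    for img in images:
        for n_segments in resolutions:             # several levels of granularity
            labels = slic(img, n_segments=n_segments)
            for seg_id in np.unique(labels):
                pixels = img[labels == seg_id]     # pixels of one segment
                embeddings.append(embed_segment(pixels))
    # Each cluster of similar segments is treated as one concept.
    return KMeans(n_clusters=n_concepts, n_init=10).fit(np.array(embeddings))

if __name__ == "__main__":
    toy_images = [np.random.rand(64, 64, 3) for _ in range(4)]
    concepts = extract_concepts(toy_images)
    print("segments per concept cluster:", np.bincount(concepts.labels_))
```

In the published ACE work, a final step scores each concept’s importance to the classifier’s prediction; that scoring is what licenses the claim that, say, a fuzzy yellow texture matters for the “tennis ball” class.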

This methodology is distinct from ‘explainable’ machine learning, which commonly grounds its explanations of automated decisions in data and data structures that are not themselves inherently meaningful to human users, such as individual pixels or unstructured collections of pixels in the case of an image classifier.

In philosophical jargon, there is a distinction in the epistemological license between explainable and interpretable machine learning models. How might a philosopher characterize this difference in license?

Sample graphic from Google’s Automated Concept-based Explanation (ACE) system.

Characterizing Epistemological License

To have epistemological license is to have some reason for believing a fact to be true. Epistemological license can offer direct or indirect support for a conclusion, and that support can be either logical or empirical. I have direct epistemological license for believing that if 2x+1=7, then x=3, due to facts about mathematical logic. If I am sitting in a windowless office, hear thunder, and then see a co-worker walk in from outside with her clothing drenched, I have indirect epistemological license for believing it is raining outside, due to empirical facts.
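Spelled out, the logical support in the arithmetic example is nothing more than the following derivation; each step is licensed by an elementary rule of algebra.

```latex
% Direct logical license: the conclusion follows from the premise by algebra alone.
\begin{align*}
  2x + 1 &= 7 \\
  2x     &= 6 \\
  x      &= 3
\end{align*}
```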

On the issue of interpretable machine learning models, Cynthia Rudin contrasts their benefits with the commonly held shortcomings of merely explainable models. To be explainable, an algorithm must provide explanations. But explanations “cannot have perfect fidelity with respect to the original model. If the explanation was completely faithful to what the original model computes, the explanation would equal the original model, and one would not need the original model in the first place, only the explanation. […] This leads to the danger that any explanation method for a black box model can be an inaccurate representation of the original model in parts of the feature space.”

“[A]n interpretable machine learning model [however] is constrained in model form so that it is either useful to someone, or obeys structural knowledge of the domain, such as monotonicity, causality, structural (generative) constraints, additivity, or physical constraints that come from domain knowledge. Interpretable models could use case-based reasoning for complex domains.”

To a philosopher, the two kinds of machine learning model (MLM) present knowledge at differing degrees of fineness, and information that is more or less structured. An explainable MLM may report, for example, the individual pixels on which a decision hinges. This is a presentation of fine-grained knowledge, but it is less useful to human users and exposes a vulnerability of, for example, image classifiers: the decision process is easily manipulated, being sensitive to small changes in input conditions.
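To make that sensitivity concrete, here is a toy Python illustration using a linear scoring function over raw pixels; the weights and image are random stand-ins, not any real classifier. A perturbation far too small to notice at the level of any single pixel still shifts the decision score substantially, because the fine-grained pixel evidence accumulates across thousands of pixels.

```python
# Toy illustration of pixel-level sensitivity (not a real image classifier).
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64 * 64
w = rng.normal(size=n_pixels)        # stand-in weights of a pixel-level model
x = rng.random(n_pixels)             # stand-in flattened image

def logit(img):
    return float(w @ img)            # stand-in for a class score

eps = 0.01                           # imperceptibly small per-pixel change
x_perturbed = x + eps * np.sign(w)   # nudge every pixel slightly "with" its weight

print("per-pixel change:", eps)
print("score before:", round(logit(x), 2))
print("score after: ", round(logit(x_perturbed), 2))  # shifts by eps * sum(|w|)
```

The pixel-level explanation remains perfectly fine-grained throughout, yet it gives the human user little purchase on why the score moved.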

Interpretable MLMs, by contrast, offer reports on more coarse-grained knowledge, but this knowledge is more robust and is paired with human-meaningful concepts. That human-meaningful concepts can be presented to the user at all is due to the presence of additional structure. Structure, in this sense, has a distinctly mathematical flavor, referring to ‘objects’ that endow sets (collections of elements such as individual pixels) with meaning and significance. In the graphic above, for example, the texture of a tennis ball serves as a characteristic feature.

To the philosopher, both explainable and interpretable MLMs are granted their epistemological license on much the same foundation, namely the facts of data and data structure. The difference lies in the presence of additional structure in the case of interpretable models. This structure offers an additional layer of indirect epistemological license, both logical and empirical: logical, in the way that an MLM may be designed to follow certain structure-building processes, as noted by Rudin; and empirical, in the way that concepts are often associated with real and meaningful facts about the world.

For additional content, see Futures Limited.

For additional information, please see:

Explaining Explanations: An Overview of Interpretability of Machine Learning

Towards Automatic Concept-based Explanations

Google’s AI explains how image classifiers made their decisions

Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead