Explainable AI and Design



Typically, attributions are presented as a table that breaks down which factors collectively contribute to the total risk. Such charts and technical language may suit a technical audience, but they are not easily approachable for the non-technical users who interact with these models. Even for technical users, the same information can often be conveyed more effectively so it is readily perceived. Let's explore some design ideas that can significantly enhance the human-friendliness of these explanations.
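
To make the starting point concrete, here is a minimal sketch of the kind of attribution table described above. Only the 12% contribution of long-term smoking and the 30% total risk come from the article's example; the other factor names and values are illustrative placeholders chosen so the contributions sum to 30%.

```python
# Sketch of a per-factor attribution table for a single prediction.
# All values except the 12% smoker contribution are placeholders.
attributions = {
    "Smoker for 10+ years": 0.12,   # figure used later in the article
    "Age bracket": 0.08,            # placeholder
    "Family history": 0.06,         # placeholder
    "Occupational exposure": 0.04,  # placeholder
}
total_risk = sum(attributions.values())  # 0.30 in this illustrative scenario

print(f"Predicted 5-year risk: {total_risk:.0%}")
print(f"{'Factor':<28}{'Contribution':>14}")
for factor, value in sorted(attributions.items(), key=lambda kv: -kv[1]):
    print(f"{factor:<28}{value:>14.0%}")
```

A raw printout like this is exactly what the designs below try to improve on.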

Narratives and visuals make explanations more approachable

By combining storytelling narrative with visuals, we can make explanations more approachable through micro-interactions and micro-visualizations. In the examples that follow, the user sees that they have a 30% risk of developing lung cancer within 5 years, along with two supplementary XAI components that break down that prediction: Top Factors and Similar Cases.

The Top Factors component is a visually enhanced way to present the "feature attributions" described above, whereas the Similar Cases component presents the "rule-based" explanation in a way that is more immediately understandable. Importantly, these components are interactive, and how the user interacts with them is valuable not only to them but also to us as designers. Interaction design can tell the explanation algorithm which types of explanations the doctor or patient finds more valuable, allowing the explanation modules to adaptively improve over time.
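
One way this feedback loop could work is sketched below, assuming a hypothetical `ExplanationFeedback` logger (the component and role names are invented for illustration): the interface records which explanation component each type of user engages with and surfaces the most-used one first.

```python
from collections import Counter


class ExplanationFeedback:
    """Hypothetical logger tracking which explanation component a user
    engages with, so the interface can prioritise the more useful one."""

    def __init__(self) -> None:
        self.engagement = Counter()

    def record_interaction(self, user_role: str, component: str) -> None:
        # e.g. record_interaction("doctor", "top_factors")
        self.engagement[(user_role, component)] += 1

    def preferred_components(self, user_role: str) -> list[str]:
        # Components ordered by how often this role interacts with them.
        counts = Counter({c: n for (r, c), n in self.engagement.items() if r == user_role})
        return [component for component, _ in counts.most_common()]


feedback = ExplanationFeedback()
feedback.record_interaction("doctor", "similar_cases")
feedback.record_interaction("doctor", "similar_cases")
feedback.record_interaction("doctor", "top_factors")
print(feedback.preferred_components("doctor"))  # ['similar_cases', 'top_factors']
```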

Interaction design can also let users drill down from the top contributor for more detail. Taking the smoker factor as an example, the user can scan the different categories of smokers and read that being an active smoker for 10+ years contributes 12% to the model prediction.

Top Factor — Smoker For 10+ Years
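
A drill-down like this implies a nested attribution structure behind the component. The sketch below assumes such a structure; only the 12% figure for the 10+ year smoker comes from the example above, and the other category values are placeholders.

```python
# Hypothetical nested attribution supporting the drill-down described above:
# the top-level "Smoker" factor expands into per-category contributions.
smoker_factor = {
    "name": "Smoker",
    "contribution": 0.12,  # the 12% figure from the example
    "categories": [
        {"label": "Active smoker for 10+ years", "contribution": 0.12, "applies": True},
        {"label": "Active smoker for <10 years", "contribution": 0.07, "applies": False},  # placeholder
        {"label": "Former smoker", "contribution": 0.03, "applies": False},                # placeholder
        {"label": "Never smoked", "contribution": 0.00, "applies": False},                 # placeholder
    ],
}

for category in smoker_factor["categories"]:
    marker = "→" if category["applies"] else " "
    print(f"{marker} {category['label']:<30}{category['contribution']:.0%}")
```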

Rule-based explanations are especially valuable for letting domain experts influence model development: rules are readily comprehensible to them, they can judge whether a rule accords with their intuition, and they can even edit a rule to improve it. The Similar Cases component (see below) presents the rule shown above through an interaction that makes it immediately understandable. Here we catch the audience's attention with an aesthetic representation of the data: the scattered dots illustrate the cases similar to the persona, and the red-orange color marks the cases that developed lung cancer within the next five years.

105 Cases Similar to You
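
Behind the visual, a Similar Cases view can be backed by something as simple as a rule match over historical records. The sketch below assumes a hypothetical rule and a tiny set of made-up records; the article's example surfaces 105 matching cases for the persona.

```python
def matches_rule(case: dict) -> bool:
    # Hypothetical rule, e.g. "smoker for 10+ years AND age between 55 and 64".
    return case["smoking_years"] >= 10 and 55 <= case["age"] <= 64


# Illustrative historical records; a real system would query a patient database.
historical_cases = [
    {"age": 58, "smoking_years": 15, "developed_lung_cancer": True},
    {"age": 61, "smoking_years": 12, "developed_lung_cancer": False},
    {"age": 42, "smoking_years": 20, "developed_lung_cancer": False},  # excluded by the rule
]

similar = [c for c in historical_cases if matches_rule(c)]
positives = sum(c["developed_lung_cancer"] for c in similar)
print(f"{len(similar)} similar cases, {positives} developed lung cancer within 5 years")
```

Each matching record becomes one dot in the scatter, colored red-orange if the case went on to develop lung cancer.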

Interaction design can help highlight anomalies

In the above example we showed the top contributions to a specific model prediction. Below we show the aggregate global contributions of various factors. This is especially useful for domain experts (e.g., doctors in this case) to "sanity-check" a model. For example, a doctor may find it suspicious that the age bracket 25–34 has a disproportionately large impact. This may indicate an issue in the underlying training data or labels (e.g., certain races or geographies may be over-represented), and the doctor may flag this as incorrect. This feedback can be captured by the explanation system and used to improve the training labels, leading to a retrained, more accurate model.
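
A common way to compute such global contributions is to aggregate per-prediction attributions across the dataset. The sketch below assumes that approach, with made-up attribution values; the suspiciously prominent "age 25–34" factor mirrors the doctor's sanity-check scenario above.

```python
import statistics
from collections import defaultdict

# Per-prediction attributions (illustrative placeholder values).
per_prediction_attributions = [
    {"smoker_10_plus_years": 0.12, "age_25_34": 0.09, "family_history": 0.05},
    {"smoker_10_plus_years": 0.10, "age_25_34": 0.11, "family_history": 0.04},
    {"smoker_10_plus_years": 0.14, "age_25_34": 0.08, "family_history": 0.06},
]

# Aggregate absolute contributions per factor across all predictions.
global_importance = defaultdict(list)
for attribution in per_prediction_attributions:
    for factor, value in attribution.items():
        global_importance[factor].append(abs(value))

for factor, values in sorted(global_importance.items(),
                             key=lambda kv: -statistics.mean(kv[1])):
    print(f"{factor:<24} mean |contribution| = {statistics.mean(values):.2%}")

# A doctor reviewing this table might flag "age_25_34" as suspiciously large,
# and that feedback could trigger a review of the training data and labels.
```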