Explainable AI — Solving the Black Box Problem : Panel @ RE•WORK Deep Learning Summit / SFO

As an extension of my work on Responsible AI, I am hosting a panel at the RE•WORK DL Summit on Jan 30, 2020. Multi-talented panelists and an interesting topic, what else can one ask for? Let me jot down some thoughts as a prelude to this panel …

As I wrote earlier [here], in years gone by we could precisely list the steps to do a thing, call it an algorithm, and the decisions it made were very clear to us.

Unfortunately, no more …

Now we define an algorithm as an architecture … i.e. how to string a bunch (sometimes a yuuge bunch) of neurons together, and then we let them loose over a large pile of data … they learn, and the result is a model … when we give it actual data, it churns out decisions (a small sketch follows the definitions below):

  • Model := Algorithm + Training Data
  • Decision := Model + Actual Data
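
To make these two definitions concrete, here is a minimal sketch in Python; scikit-learn, the synthetic data, and the choice of LogisticRegression are illustrative assumptions, not anything specific to deep learning:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))        # a pile of training data (synthetic)
y_train = (X_train[:, 0] > 0).astype(int)  # labels

# Model := Algorithm + Training Data
algorithm = LogisticRegression()           # the "architecture"
model = algorithm.fit(X_train, y_train)    # learning produces the model

# Decision := Model + Actual Data
X_actual = rng.normal(size=(5, 4))         # actual (production) data
decisions = model.predict(X_actual)        # the model churns out decisions
print(decisions)
```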

We don’t exactly know what they have learned, we don’t have a clue how they make decisions, and, more problematic, we can’t reason about their decisions the way we do with algorithms … and that is where explainability comes in!

So what exactly is explainability? It is extracting meaning out of the black box, using various techniques. We will see some of them below … but before that, some pragmas …

The basic question to ask is: can we trust our AI models to do the right thing, and will they continue to do so?

Unfortunately the answer is not simple. It has at least two dimensions — the type of explainability and the audience asking the question.

There is a whole spectrum here:

  • Transparency: the internal workings of an algorithm (not necessarily the model; see the definitions above)
  • Interpretability: the analysis of its decisions
  • Explainability: of the model itself
  • Metrics: things like model drift, data distribution, exceptions and so forth (a minimal drift check follows this list)
  • Counterfactuals: how the decision would change if the inputs were different
  • Justification: of the decisions
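
As a taste of the metrics dimension, here is a minimal drift check: compare a feature's training-time distribution against what the model sees in production. The two-sample Kolmogorov-Smirnov test below is one common choice; the synthetic data and the 0.01 threshold are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, size=1000)  # a feature as seen at training time
live_feature = rng.normal(loc=0.3, size=1000)   # the "same" feature in production

# Two-sample KS test: has the feature's distribution drifted?
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # illustrative threshold, not a recommendation
    print(f"Possible drift (KS statistic={stat:.3f}, p={p_value:.4f})")
```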

The audience includes the developers, the model reviewers, the AI governance team, the Fair and Responsible AI group, and the regulators. The developers and model reviewers would look at all of the above, while the regulators are interested in justification, the Fair and Responsible group might look at the counterfactuals, and the AI governance team might be interested in the metrics. In short, an explainability framework has to satisfy a broad spectrum of constituents!

If all this does not convince you, may I suggest an excellent book, “A Human’s Guide to Machine Intelligence” by Prof. Kartik Hosanagar? My review of the book is here. I also have a few more references at the end of this post.

Moving on, explainability needs to be incorporated at every stage of an AI project, not just at the end. It is easier to explain with the diagram below.

And there are many mechanics and mechanisms; it is still a nascent domain that is evolving constantly.
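
One widely used, model-agnostic example is permutation feature importance: shuffle one input at a time and watch how much the model's score degrades; a large drop means the model leans heavily on that feature. A minimal scikit-learn sketch, with synthetic data as an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```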

And finally, some of the questions we will be discussing in the panel. If you have more, please let me know. I will try to add the notes from the panel after the event:

  1. What is Explainability?
  2. Does AI need to be explainable? If so, why?
  3. Whose responsibility is it, and when does it need to be considered in the process of designing and deploying systems?
  4. What are the benefits of interpretability?
  5. Why is it difficult to open up the black box for many industries?

And a couple of good references:

  1. An Introduction to Machine Learning Interpretability, by Patrick Hall at H2O.ai [here]
  2. Interpretable Machine Learning, by Christoph Molnar [here]