Real World Hacks for Explainable AI

Complying with regulation is one of the last reasons to think about making artificial intelligence more explainable. Sure, there are benefits: decisions made by a replicable, well-understood process are more trustworthy than those made by a black box. But making your models explainable rewards you in many ways beyond regulation. Most importantly, it makes your business managers happy by providing a higher return on investment.

The good news is that there are hacks your team can start working on right now to make things easier, and it should start now.

Explainable AI is about ROI

One of the challenges in training an artificially intelligent model is that there is no way to make incremental improvements. You run the model and, if the accuracy is just below what would be acceptable, there is no way to ‘push’ it past that point. You have to think about a better mapping of your inputs and outputs, or perhaps pick a more complex model configuration, and then train all over again to see if your changes made things better. The process is very iterative and expensive. On the other hand, if there is more transparency and it is easier to see what is not working, it is also easier to fix it.

Data scientists running AI programs at scale derive another benefit. It is very common to fail when chasing some large ambition. However, if things are explainable, there may be ancillary successes that become valuable, much like memory foam emerging as a spin-off of NASA research.

For example, let’s say a team at a bank is trying to answer the question: how much money should they spend to acquire a customer? One of the considerations is how much the customer is worth over her lifetime. This is a multi-faceted question with scores of inputs, multiple assumptions, and complex modeling. Today, in most cases, this question is answered using heuristics and experience. An AI model may or may not work with any success. However, if the data science team uses explainable models in its effort to predict the lifetime value of the customer, it may stumble upon a very successful model for her credit-worthiness along the way.

While explainable AI matters for regulation in its own right, it also makes data science efforts more productive in that context. In regulated situations there are often stringent checks a process must go through. If these checks can be applied early, teams can gauge the likelihood that a process will succeed and prioritize accordingly. Explainable AI helps do that.

Explainable AI is a way of thinking that helps you get the most out of the AI dollars you spend. Here are a few ideas on how data science teams may think about it.

1. Don’t Use Deep Learning if Possible

I have written multiple times that learning using neural networks, especially deep learning, is a very powerful artificial intelligence technique. You may want to look up this article, ‘AI and Data Science for Dummies,’ which builds on the idea that, for the most part, deep learning is simply a fancy way of applying statistics. In the world of business analytics, deep learning is frequently overkill. Deep learning comes with an awesome strength: its complexity can easily be scaled. However, it also comes with the cost that its behavior is not easily explainable.

On the surface, the preference not to use deep learning sounds like a huge cost, but it really isn’t. The majority of applied AI is just one step away from probabilities and statistics. Using algorithms like Naïve Bayes, Logistic Regression, or Random Forest may not make you miss neural networks, and each of these comes with the convenience that it is completely explainable.
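As a rough illustration of what “completely explainable” buys you, here is a minimal sketch using scikit-learn; the feature names and data are hypothetical, but the point is that the model’s weights themselves are the explanation.

```python
# A minimal sketch of the "explainable by construction" route, assuming
# scikit-learn is available; the feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["tenure_months", "monthly_spend", "support_tickets"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Every prediction is a weighted sum of the inputs, so the learned weights
# are the explanation: sign and magnitude show how each feature moves the odds.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")
```

No extra tooling is needed to answer “why did the model say that?”; the answer is right there in the coefficients.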

Even if you must use deep learning, prefer simpler models over more complicated ones.

2. Decompose Your Problem

Solving a complex problem needs a complex deep learning model and a lot of data to train it. However, if you can decompose the problem into smaller parts based on your business experience, many of these parts may not need deep learning, or may very well be deterministic. Even if each part needs deep learning, now you are replacing one large black box with many small black boxes. This by itself is a big step towards explainability.
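As a hedged sketch of what decomposition can look like in code, the lifetime-value example from earlier might be split into a spend model and a churn model whose outputs are combined deterministically. The sub-models, features, and formula below are entirely hypothetical.

```python
# A sketch of decomposing one big question into two small, inspectable models;
# all data, features, and the combination formula are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
X_usage, X_risk = rng.normal(size=(1000, 4)), rng.normal(size=(1000, 3))
annual_spend = X_usage @ np.array([120, 80, -40, 10]) + rng.normal(scale=50, size=1000)
churned = (X_risk[:, 0] - X_risk[:, 1] + rng.normal(size=1000)) > 0

# Two small, separately explainable models instead of one end-to-end black box.
spend_model = LinearRegression().fit(X_usage, annual_spend)
churn_model = LogisticRegression().fit(X_risk, churned)

# The final answer is a deterministic, fully visible combination:
# expected lifetime value ~ predicted annual spend x expected years retained.
expected_years = 1.0 / np.clip(churn_model.predict_proba(X_risk)[:, 1], 0.05, 1.0)
lifetime_value = spend_model.predict(X_usage) * expected_years
print(lifetime_value[:5])
```

Each piece can now be validated, explained, and replaced on its own, which is exactly the point of the hack.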

It is important to understand the cost of this hack. There may be some unique, counter-intuitive relationship between your inputs and outputs that you miss by decomposing the problem. However, while such corner cases exist, the advantage of making the AI explainable may more than pay for the loss. If it’s worth the resources, you may also consider running the full, undecomposed model in parallel and comparing the outputs to better understand its behavior.

3. Stress Test and Visualize Relationships

Let’s say you are stuck with a deep learning model that you can’t do without. In that case, how does the output change if you gradually change only one input at a time? What happens if you provide highly biased input? Which inputs is the output most sensitive to? Can you group the inputs by some dimension you care about, e.g. race, and then compare the outputs for each group?

This approach is no different from a business manager’s approach to a business problem. You need to stress-test it. You need to build intuition about what is going on. Again, plain old mean and standard deviation reveal a lot of insight. Some platforms, such as Google Cloud, offer these analyses as a feature. As always in the world of data science, nothing beats human intuition.
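As a minimal sketch of this kind of stress test, assuming you already have a trained `model` with a `predict` method and a feature matrix `X` (both hypothetical here), the two questions above reduce to a few lines of NumPy.

```python
# A minimal stress-test sketch; `model`, `X`, and the masks are assumed to
# exist and are hypothetical placeholders for your own objects.
import numpy as np

def one_at_a_time_sensitivity(model, X, feature_idx, deltas):
    """Shift a single input by each value in `deltas`, hold everything else
    fixed, and report the mean prediction at each step."""
    means = []
    for d in deltas:
        X_shifted = X.copy()
        X_shifted[:, feature_idx] = X[:, feature_idx] + d
        means.append(model.predict(X_shifted).mean())
    return np.array(means)

def group_gap(model, X, group_mask):
    """Compare mean predictions between two groups (e.g. split on a
    protected attribute) to surface potential bias in the black box."""
    preds = model.predict(X)
    return preds[group_mask].mean() - preds[~group_mask].mean()
```

Plotting the output of `one_at_a_time_sensitivity` against the deltas gives exactly the kind of visualized relationship this section is about.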

4. Business vs. Learning Rules

The last hack I propose is controversial, and many do not agree with me. Instead of using an AI model to run your business, use the model to craft some business rules, and use those rules to run your business. The idea is to use all the hacks above to form a simpler view of the relationships between the inputs and the outputs, and then run on those simpler relationships. The advantage of this approach is that you avoid surprises: you can explain each and every relationship, and you can tweak things as and when necessary. This post explains the idea further: ‘How to Build an AI in a Day’.
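One common way to extract such rules, sketched below under the assumption that a trained `black_box`, a feature matrix `X`, and a `feature_names` list already exist, is to fit a shallow decision tree as a surrogate on the model’s own predictions and read the tree off as if/then rules.

```python
# A hedged sketch of turning a model into business rules: fit a shallow
# decision tree as a surrogate on the black box's own predictions, then
# read the tree off as rules. `black_box`, `X`, and `feature_names` are
# hypothetical placeholders.
from sklearn.tree import DecisionTreeClassifier, export_text

surrogate = DecisionTreeClassifier(max_depth=3)
surrogate.fit(X, black_box.predict(X))   # mimic the model, not the raw labels

# Each root-to-leaf path is a human-readable rule you can review, tweak,
# and hand to the business as the thing that actually runs.
print(export_text(surrogate, feature_names=feature_names))
```

The shallow depth is the whole trade: you lose some fidelity to the original model, but every decision the business makes is now written down in plain conditions.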

5. Layer-wise Relevance Propagation

The fifth technique is a bit more theoretical. It’s called Layer-wise Relevance Propagation (LRP). The basic idea is to start from the output layer of a deep neural network and bean-count the weights, working your way back from each output node to the input layer. This reveals which nodes matter and which don’t, among many other things. Remember that this is all just linear algebra.
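To show how mechanical that bean counting is, here is a rough NumPy sketch of the commonly used epsilon variant of LRP for a small all-ReLU network. The layer shapes are made up, and a production implementation would treat the output layer and other activation functions differently.

```python
# A rough NumPy sketch of the LRP epsilon rule for a small ReLU network;
# weights, biases, and shapes are hypothetical.
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Return one relevance score per input feature for a feedforward ReLU net."""
    # Forward pass, keeping every layer's activations.
    activations = [x]
    for W, b in zip(weights, biases):
        activations.append(np.maximum(0, activations[-1] @ W + b))

    # Start with the output as the total relevance, then redistribute it
    # backwards in proportion to each neuron's contribution a_j * w_jk.
    relevance = activations[-1]
    for W, b, a in zip(reversed(weights), reversed(biases), reversed(activations[:-1])):
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilized denominators
        s = relevance / z
        relevance = a * (s @ W.T)
    return relevance
```

Because each backward step is just multiplication and division over the same weights used in the forward pass, the relevance scores come almost for free once the network is trained.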

To be successful, LRP puts some restrictions on the design of the network, especially around the activation functions of the neurons. Unless you are really into deep theoretical work, these costs shouldn’t matter much, but it’s important to understand them and plan accordingly from the outset. There are other techniques in this family as well.

It is important to recognize that in many cases none of the above may be possible. Often, taking a parallel approach with various models running together helps you better understand the inner workings of the black box called AI.

Suggested Read: Feature Engineering for Business Managers