Visualizing and Understanding Deep Neural Networks: Part 1


With the growing complexity of AI models, the need to understand their inner workings has become critical

Deep learning has led to unprecedented breakthroughs in many areas such as computer vision, voice recognition, and autonomous driving. In recent years it has proved very powerful at solving large-scale real-world problems and has been adopted in many information processing applications such as image recognition, language translation, and automated personalization. There is now hope that these same techniques will be able to diagnose deadly diseases, make trading decisions, and do many other things that could transform our lives and many industries.

While a deep neural network learns efficient representations and delivers superior performance, understanding these models remains a challenge because of their inherently opaque nature and unclear working mechanisms. They are often treated as black-box methods that simply perform their assigned tasks for users.

Without a clear understanding of how and why a model works, it is difficult for a user to determine when the model works correctly, when it fails, and how it can be improved. As a result, users treat neural networks as black boxes: they cannot explain how the mapping from input to output is performed or give reasons for a particular prediction. This lack of transparency is a drawback for applications involving high-stakes decision making, especially in regulated industries where the techniques used must be understood and validated.

Additionally, automated decisions made by these models have far-reaching societal implications: they can widen inequality across social classes and races and embed bias and discrimination into the systems that rely on them. Consequently, transparency and fairness have been gaining more attention lately, and efforts are being made to make deep learning models more interpretable and controllable by humans, including building models that can explain their decisions, detecting model bias, and establishing trust and transparency in how they behave in the real world.

Deep learning models are often harder to interpret than most machine learning models because they learn representations that are complicated to extract and present in a human-readable form. While this may be true for certain types of models, it is not entirely true for a vision model such as a convolutional neural network (CNN): the representations learned by a CNN are highly amenable to visualization, in large part because they are representations of visual concepts.

This work proposes a visual exploration tool, DeepViz, which combines an explainable-systems approach with image localization and visualization techniques to interpret the inference of a visual classification task. The tool jointly predicts a class label and shows, using visual evidence, why the predicted label is appropriate for a given image. It does so with the following methods.

  1. Image Sensitivity: Highlights the image regions that contributed most to the classification decision by localizing the detected feature or object in the input image (see the first sketch after this list).
  2. Activation Graph: Visualizes the intermediate outputs of the hidden layers to show how the network transforms an input through successive layers (see the second sketch after this list).
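
The article does not spell out exactly how DeepViz computes its sensitivity map, but a gradient-based saliency map is one common way to produce this kind of localization. The sketch below is a minimal, hypothetical example using a pretrained Keras VGG16; the model choice and the `saliency_map` helper are assumptions for illustration, not DeepViz's actual implementation.

```python
# Minimal sketch (not DeepViz itself): gradient-based saliency with a
# pretrained Keras VGG16. The model choice and helper name are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import vgg16

model = vgg16.VGG16(weights="imagenet")

def saliency_map(image_path):
    # Load and preprocess one image to the network's expected input size.
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
    x = vgg16.preprocess_input(
        tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...])
    x = tf.convert_to_tensor(x)

    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x)
        top_class = tf.argmax(preds[0])
        top_score = tf.gather(preds[0], top_class)

    # Gradient of the winning class score w.r.t. the input pixels: large
    # magnitudes mark the pixels the decision is most sensitive to.
    grads = tape.gradient(top_score, x)
    saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]  # collapse colour channels
    return saliency.numpy(), int(top_class)
```

Overlaying the returned saliency map on the original image highlights the localized feature or object that drove the prediction.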
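
For the activation graph, a standard way to expose intermediate outputs in Keras is to wrap the hidden layers in a second model. The sketch below, again assuming a pretrained VGG16, is illustrative; the layer selection and figure layout are not taken from DeepViz.

```python
# Minimal sketch (not DeepViz itself): plotting intermediate activations of a
# pretrained Keras VGG16. Layer selection and figure layout are illustrative.
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import vgg16

model = vgg16.VGG16(weights="imagenet")

# Wrap the convolutional layers in a model that maps an input image to every
# intermediate feature map at once.
conv_outputs = [layer.output for layer in model.layers if "conv" in layer.name]
activation_model = tf.keras.Model(inputs=model.input, outputs=conv_outputs)

def show_activations(image_path, n_channels=8):
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
    x = vgg16.preprocess_input(
        tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...])
    activations = activation_model.predict(x)

    # Show the first few channels of each convolutional layer to see how the
    # input is transformed as it passes through successive layers.
    for layer_activation in activations:
        fig, axes = plt.subplots(1, n_channels, figsize=(2 * n_channels, 2))
        for i, ax in enumerate(axes):
            ax.imshow(layer_activation[0, :, :, i], cmap="viridis")
            ax.axis("off")
    plt.show()
```

Plotting the feature maps layer by layer makes it easy to see early layers responding to edges and textures while deeper layers respond to increasingly abstract, class-specific patterns.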