What is OpenAI Microscope?

Original article can be found here (source): Artificial Intelligence on Medium

A collection of visualizations of significant layers of eight popular CNN models.

Source: OpenAI

Microscope systematically visualizes every neuron in several commonly studied vision models and makes all of those neurons linkable. It can help in the following ways:

  • Visualizations can be generated using Lucid (explained through code below), an open-source library built by OpenAI on top of TensorFlow.
  • Making models and neurons linkable allows immediate scrutiny and further exploration of research making claims about those neurons.
  • It can help researchers, biologists, pathologists, and many others in diverse ways, especially when seen in the present context of the pandemic.
  • Ultimately, it helps us gain better insight into the black-box nature of CNNs.

Who is this for?

  • Anyone who’s interested in exploring how neural networks work.
  • Researchers who want shared artifacts to facilitate long-term comparative study of these models.
  • Researchers with adjacent expertise (neuroscience, for instance) will find particular value in being able to more easily approach the internal workings of these vision models.

How to use it? Can anyone point me at some interesting things?

The OpenAI Microscope is based on two concepts: a location in a model and a technique. Metaphorically, the location is where you point the microscope; the technique is what lens you affix to it.

Models are composed of a graph of “nodes” (the neural network layers, or “ops”), which are connected to each other through “edges.” Each node contains hundreds of “units”, which are roughly analogous to neurons. Most of the techniques we use are useful only at a specific resolution. For instance, feature visualization can only be pointed at a “unit”, not its parent “node”.

Top 8 Widely Used CNN Models

What if I found something interesting?

You can also join the Distill slack (join link) #circuits channel for more detailed discussion of features and circuits.

Can I reuse these images? What's the license?

Visualizations generated by Microscope are released under the Creative Commons Attribution 4.0 license (CC BY 4.0).

Let's See an Example

Let's take my favorite, the popular ResNet neural network, which won the ImageNet Challenge in 2015 and was far deeper (in number of layers) than its competitors. It solves the problem of image classification, where the input is an image belonging to one of 1,000 different classes (e.g. book, bird, car, cup) and the output is a probability vector of 1,000 numbers.
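That 1,000-way probability vector is produced by a softmax over the network's final scores (logits). A minimal NumPy sketch of just this last step; the logits here are random placeholders, not real ResNet outputs:

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    shifted = logits - np.max(logits)  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

rng = np.random.default_rng(0)
logits = rng.normal(size=1000)   # stand-in for a network's final-layer output
probs = softmax(logits)

print(probs.shape)               # (1000,) — one probability per class
print(probs.sum())               # sums to 1.0 (up to float error)
```

The class with the largest probability (`np.argmax(probs)`) is the predicted label.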

With the help of OpenAI Microscope, I can browse a sample dataset and visualize the core architecture of ResNet v2 50, alongside the state of the image-classification process at each layer and the dataset samples used.

OpenAI also hopes that Microscope will contribute to the Circuits collaboration, work being done to reverse-engineer neural networks by understanding the connections among neurons.

Moreover, OpenAI isn't alone in having such a library. Others include Keras's filter visualization (https://keras.io/examples/conv_filter_visualization/) and PyTorch's Captum for model interpretability (https://github.com/pytorch/captum); Google, together with OpenAI, has also released Activation Atlases (https://openai.com/blog/introducing-activation-atlases/) (Colab code).

Other visualizations included in Microscope are the 2015 image-hallucination technique DeepDream and synthetic tuning curves.

Lucid Library:

Lucid is a collection of infrastructure and tools for research in neural network interpretability. Most importantly, it is research code, NOT production code.


  • Runs on TensorFlow 1.x.
  • Needs a GPU.
  • Can be run in Colab.
  • Supports Python 2.7, per the docs.

Let's dive into some code!

  1. Install, Import, and Load the Model:

Now the library is set up and the InceptionV1 model is loaded.
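A sketch of that setup step, following the Lucid README and tutorial notebook (requires TensorFlow 1.x; the package name and module paths are taken from the Lucid repository):

```python
# In Colab: !pip install --quiet lucid

import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform

# Import a pre-trained InceptionV1 (GoogLeNet) from Lucid's model zoo
model = models.InceptionV1()
model.load_graphdef()
```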

2. Visualize a Neuron:

Visualizations are built from three components:

  • Objectives — What do you want the model to visualize?
  • Parameterization — How do you describe the image?
  • Transforms — What transformations do you want your visualization to be robust to?
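The three components can be seen in miniature in a toy NumPy sketch (illustrative only, not Lucid's API): the "network" is a single tanh neuron, the objective is its activation, the parameterization is a raw input vector, and the "transform" is a small random jitter applied before each optimization step.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)              # weights of a toy "neuron"

def activation(x):
    # objective: what we want the optimizer to maximize
    return np.tanh(w @ x)

# parameterization: describe the "image" as a raw 16-d vector
x = rng.normal(size=16) * 0.01
lr = 0.1
for _ in range(200):
    # transform: jitter the input slightly before each step so the
    # result stays highly activating under small perturbations
    jittered = x + rng.normal(scale=0.01, size=16)
    # gradient of tanh(w @ x) wrt x, evaluated at the jittered point
    grad = (1.0 - np.tanh(w @ jittered) ** 2) * w
    x = x + lr * grad
    x = x / np.linalg.norm(x)        # keep the "image" bounded

print(activation(x))                 # approaches tanh(||w||), i.e. near 1
```

The optimized input ends up aligned with the neuron's weight vector, which is exactly the intuition behind feature visualization: find the input that most excites a unit.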

3. Objectives Component:

4. Transformation robustness:

Transformation robustness tries to find examples that still activate the optimization target highly even if we slightly transform them. Even a small amount seems to be very effective in the case of images, especially when combined with a more general regularizer for high frequencies. Concretely, this means that we stochastically jitter, rotate, or scale the image before applying each optimization step.

Notice how, as the steps progress, the inner transformation sharpens the visualization.

`transform.jitter` stochastically translates a TensorFlow tensor by up to the given number of pixels (see `jitter(2)` above). To understand the logic, run `??transform.jitter` in a notebook.
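A re-implementation of the jitter idea in NumPy (Lucid's actual `transform.jitter` operates on TensorFlow tensors; this only shows the logic): pad the image, then take a random crop of the original size, which shifts the content by up to `d` pixels.

```python
import numpy as np

def jitter(image, d, rng):
    """Randomly translate an HxWxC image by up to d pixels in each axis."""
    h, w, c = image.shape
    padded = np.pad(image, ((d, d), (d, d), (0, 0)), mode="edge")
    dy = rng.integers(0, 2 * d + 1)   # random vertical offset, 0..2d
    dx = rng.integers(0, 2 * d + 1)   # random horizontal offset, 0..2d
    return padded[dy:dy + h, dx:dx + w, :]

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
shifted = jitter(img, 2, rng)
print(shifted.shape)   # (64, 64, 3) — same size, content shifted up to 2 px
```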

Now, playing with it produces the following.

5. Parameterization:

This changes which direction of descent will be steepest, and how fast the optimization moves in each direction, but it does not change what the minima are. If there are many local minima, it can stretch and shrink their basins of attraction, changing which ones the optimization process falls into.

To understand this:

All of these directions are valid descent directions for the same objective, but we can see they’re radically different. Notice that optimizing in the decorrelated space reduces high frequencies, while using L∞ increases them.
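The point about descent directions can be made concrete with a tiny NumPy example: reparameterizing x = A·y leaves the loss and its minima unchanged, but the steepest-descent direction in y-space, mapped back into x-space, becomes −A·Aᵀ·∇f(x) instead of −∇f(x). The quadratic loss and matrix A below are arbitrary illustrations, not Lucid's decorrelated parameterization.

```python
import numpy as np

# a simple quadratic loss f(x) = 0.5 * x^T H x with an ill-conditioned H
H = np.diag([100.0, 1.0])
def grad_f(x):
    return H @ x

x = np.array([1.0, 1.0])

# steepest descent directly in x-space
direction_x = -grad_f(x)

# reparameterize x = A @ y; the gradient wrt y is A^T grad_f(x),
# so one step in y-space moves x along -A A^T grad_f(x)
A = np.diag([0.1, 1.0])            # acts like a preconditioner
direction_y_in_x = -(A @ A.T) @ grad_f(x)

# same minimum (x = 0), radically different directions
cos = (direction_x @ direction_y_in_x) / (
    np.linalg.norm(direction_x) * np.linalg.norm(direction_y_in_x))
print(direction_x)        # [-100.  -1.]
print(direction_y_in_x)   # [-1. -1.]
print(cos)                # well below 1: the directions disagree
```

Both directions decrease the same loss, but they explore the landscape very differently, which is why the choice of parameterization changes what the visualization looks like.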

Complete Code:


Having worked in computer vision for quite some time, I can relate: this might go over your head on a first reading, so I would suggest reading the following to gain familiarity:

If you'd like me to explore more or share more such content, do comment below.

Keep reading _/\_ and stay safe.