GAN Lab: Train GANs in the Browser!


There are many browser-based visualization tools that help machine learning learners build intuition for neural network training, such as the TensorFlow Playground and perceptron learning applets.

These browser visualizations provide a nice UI for watching how neural networks learn over time, including plots of the complex functions they learn.

This new tool for visualizing GAN training, GAN Lab, was very helpful for my understanding of how Generative Adversarial Networks are trained. I really like the visualization of the separation between real and fake samples. The GUI shows how the samples change over time and how the discriminator and generator update their parameters.

There are five data distributions to play with: a straight line, two clusters, a circle, three separate circles, and a line with a detached head.
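Two of these toy distributions can be approximated in a few lines of NumPy. The cluster centers and noise scales below are illustrative assumptions, not GAN Lab's exact parameters:

```python
import numpy as np

def sample_two_clusters(n, rng):
    """n points drawn from one of two tight Gaussian clusters
    (hypothetical centers and noise scale)."""
    centers = np.array([[-1.0, -1.0], [1.0, 1.0]])
    picks = rng.integers(0, 2, size=n)
    return centers[picks] + 0.1 * rng.standard_normal((n, 2))

def sample_circle(n, rng):
    """n points on a unit circle with slight radial noise."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    radius = 1.0 + 0.05 * rng.standard_normal(n)
    return np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)

rng = np.random.default_rng(0)
real_samples = sample_circle(512, rng)  # the "real" data the GAN must imitate
```

In the tool, whichever distribution you pick plays the role of `real_samples`: the generator's job is to produce 2D points indistinguishable from it.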

The varying complexities of the toy datasets will give you a sense of how many epochs it typically takes for the generator and the discriminator to reach the Nash equilibrium.

I think the most useful part of this visualization is simply seeing how the fake samples are molded and change over time. You can also watch the green/purple chart, which shows how well the discriminator separates the generated samples from the real ones over time.
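The idea behind that green/purple background is just evaluating the discriminator at every point of a dense 2D grid. A minimal sketch, using a hand-written stand-in function in place of GAN Lab's trained network:

```python
import numpy as np

def toy_discriminator(points):
    """Stand-in discriminator that scores points near the unit circle
    highly. (Illustrative only -- GAN Lab evaluates its trained
    neural network here instead.)"""
    r = np.hypot(points[..., 0], points[..., 1])
    return np.exp(-((r - 1.0) ** 2) / 0.1)

# Evaluate the discriminator on a dense 2D grid -- exactly the idea
# behind the green/purple background heatmap.
xs = np.linspace(-2.0, 2.0, 50)
xx, yy = np.meshgrid(xs, xs)
heatmap = toy_discriminator(np.stack([xx, yy], axis=-1))  # shape (50, 50)
```

Colouring each grid cell by its `heatmap` value gives the two-tone picture: regions the discriminator considers "real" in one colour, "fake" in the other.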

It was also interesting to see how the output of the generator eventually converged to a set of samples that wasn’t quite identical to the real data points.

Another really nice feature of this tool is the sequential labeling of the algorithm:

Discriminator Update

  1. Generator derives samples from noise
  2. Discriminator classifies samples (real and fake combined)
  3. Computes discriminator loss
  4. Computes discriminator gradients
  5. Updates discriminator based on gradients
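The five discriminator sub-steps can be sketched numerically. A linear generator and a logistic-regression discriminator stand in for the small neural networks GAN Lab actually trains, and the binary cross-entropy loss is an assumption consistent with standard GAN training:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_update(w, b, real, noise, gen_w, lr=0.1):
    """One discriminator update, mirroring the five sub-steps above."""
    # 1. Generator derives samples from noise (linear generator sketch).
    fake = noise @ gen_w
    # 2. Discriminator classifies samples (real and fake combined).
    x = np.vstack([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    p = sigmoid(x @ w + b)
    # 3. Computes discriminator loss (binary cross-entropy).
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # 4. Computes discriminator gradients (d loss / d logit = p - y).
    dlogit = (p - y) / len(y)
    grad_w, grad_b = x.T @ dlogit, dlogit.sum()
    # 5. Updates discriminator based on gradients (gradient descent).
    return w - lr * grad_w, b - lr * grad_b, loss
```

Calling `discriminator_update` repeatedly on a fixed batch drives the loss down as the discriminator learns to separate real from fake.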

Generator Update

  1. Generator derives samples from noise
  2. Discriminator classifies fake samples only
  3. Computes generator loss
  4. Computes generator gradients
  5. Updates generator based on gradients
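The generator's sub-steps can be sketched the same way. This sketch assumes the non-saturating generator loss, -log D(G(z)), and the same linear generator and logistic discriminator as above; the key difference from the discriminator step is that the discriminator's weights are held fixed while the gradient flows back into the generator:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator_update(gen_w, w, b, noise, lr=0.1):
    """One generator update, mirroring the five sub-steps above."""
    # 1. Generator derives samples from noise (linear generator sketch).
    fake = noise @ gen_w
    # 2. Discriminator classifies fake samples only.
    p = sigmoid(fake @ w + b)
    # 3. Computes generator loss (non-saturating form: -log D(G(z))).
    loss = -np.mean(np.log(p + 1e-9))
    # 4. Computes generator gradients (discriminator weights held fixed).
    dlogit = -(1.0 - p) / len(p)
    grad_gen_w = np.outer(noise.T @ dlogit, w)
    # 5. Updates generator based on gradients (gradient descent).
    return gen_w - lr * grad_gen_w, loss
```

Alternating this step with the discriminator step above is the full adversarial training loop the tool animates.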

I think that visualizing the steps of the algorithm in addition to the GUI of the samples and loss charts is a really great tool for understanding the GAN training process.

Finally, it was very interesting to see the discriminator's and generator's losses converge to the same value over many epochs. I think this reflects the Nash equilibrium that minimax adversarial training is theoretically founded upon.
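A quick back-of-the-envelope check makes that shared value concrete, assuming binary cross-entropy losses and the non-saturating generator objective: at the theoretical equilibrium the generator matches the data distribution, so the best discriminator outputs 0.5 for every sample, and both losses settle at log 2:

```python
import math

# At the theoretical Nash equilibrium, the optimal discriminator
# outputs 0.5 for real and fake samples alike.
d_real, d_fake = 0.5, 0.5

# Discriminator's binary cross-entropy over a balanced real/fake batch.
discriminator_loss = -0.5 * (math.log(d_real) + math.log(1.0 - d_fake))

# Non-saturating generator loss: -log D(G(z)).
generator_loss = -math.log(d_fake)

# Both equal log 2, roughly 0.693 -- the value the two curves approach.
```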

Conclusion

I am really glad I found this tool to help with my understanding of Generative Adversarial Networks, especially the training process. Please check it out for yourself!


CShorten

Connor Shorten is a Computer Science student at Florida Atlantic University. His research interests include software economics, deep learning, and software engineering.

Source: Deep Learning on Medium