Conditional and Controllable Generative Adversarial Networks

Original article was published by Abhishek Suran on Deep Learning on Medium


Understanding Conditional and Controllable GANs And Implementing CGAN In TensorFlow 2.x

In this article, we will look at conditional and controllable GANs: why they are needed and how to implement a naive conditional GAN using TensorFlow 2.x. Before reading further, you should be familiar with DCGANs, which you can read about here.

Why Conditional GANs?

Until now, the generator generated images at random, and we had no control over the class of the generated image. While training a GAN on digits, for example, the generator might produce a one, a six, or a three; we could not know in advance which digit it would generate. With a conditional GAN, we can tell the generator to produce a one or a six. This is where conditional GANs come in handy: they let you generate images of the class of your choice.

How does it work?

Until now, we fed only images to our generator and discriminator. Now we will also feed class information to both networks.

  1. The generator takes random noise and a one-hot encoded class label as input, and outputs a fake image of that class.
  2. The discriminator takes an image with the one-hot label appended as extra depth (channels), i.e., if the image is 28 × 28 × 1 and the one-hot vector has size n, the discriminator's input is 28 × 28 × (n + 1).
  3. The discriminator outputs whether the image is a real or a fake example of that class.
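The input preparation described above can be sketched in TensorFlow 2.x as two small helper functions (the function names here are illustrative, not from the original article): the noise vector is concatenated with the one-hot label for the generator, and the one-hot label is broadcast to full-resolution channel maps and stacked onto the image for the discriminator.

```python
import tensorflow as tf

def generator_input(noise, labels, num_classes):
    """Concatenate random noise with a one-hot class label."""
    one_hot = tf.one_hot(labels, num_classes)          # (batch, num_classes)
    return tf.concat([noise, one_hot], axis=-1)        # (batch, noise_dim + num_classes)

def discriminator_input(images, labels, num_classes):
    """Append the one-hot label as extra image channels."""
    one_hot = tf.one_hot(labels, num_classes)          # (batch, num_classes)
    # Broadcast each label entry across the spatial dimensions:
    # (batch, 1, 1, n) * (batch, H, W, 1) -> (batch, H, W, n)
    label_maps = one_hot[:, None, None, :] * tf.ones_like(images[..., :1])
    # Stack the label maps onto the image channels: (batch, H, W, C + n)
    return tf.concat([images, label_maps], axis=-1)
```

For a batch of 28 × 28 × 1 MNIST images with 10 classes, `discriminator_input` yields tensors of shape (batch, 28, 28, 11), matching the 28 × 28 × (n + 1) layout described above.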