Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

Link to the paper

Key Contributions
  1. This paper introduces a class of GANs called Deep Convolutional Generative Adversarial Networks (DCGANs), which are stable to train in most settings.
  2. The paper also shows that the trained generators have vector arithmetic properties in their latent space, which allow for easy image manipulation.

Background Concepts
  1. GAN: Refer here
  2. Convolutional Neural Network: A convolutional neural network (CNN) is a class of deep, feed-forward artificial neural networks, most commonly applied to analyzing visual imagery.
  3. Batch Normalization: Batch normalization helps stabilize learning by normalizing the inputs to each unit to have zero mean and unit variance.
  4. ReLU Activation: ReLU (Rectified Linear Unit) is an activation function where f(x)=0 for x<0 and f(x)=x for x>=0.
  5. LeakyReLU Activation: LeakyReLU is an activation function where f(x)=⍺x for x<0 and f(x)=x for x>=0. Here ⍺ is called the leak, and it helps increase the range of the ReLU function.
  6. Dropout: Dropout refers to dropping out units from layers in the neural network. This is done for reducing overfitting in neural networks.
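The activation functions and Batch Normalization defined above can be sketched in a few lines of plain Python (illustrative scalar versions only; real implementations operate on tensors):

```python
def relu(x):
    # ReLU: f(x) = 0 for x < 0, f(x) = x for x >= 0
    return max(0.0, x)

def leaky_relu(x, leak=0.2):
    # LeakyReLU: f(x) = leak * x for x < 0, f(x) = x for x >= 0
    return x if x >= 0 else leak * x

def batch_norm(batch, eps=1e-5):
    # Normalize a batch of scalar activations to zero mean and
    # unit variance (eps avoids division by zero).
    n = len(batch)
    mean = sum(batch) / n
    var = sum((v - mean) ** 2 for v in batch) / n
    return [(v - mean) / (var + eps) ** 0.5 for v in batch]
```

For example, `relu(-3.0)` returns `0.0`, while `leaky_relu(-3.0)` returns `-0.6`, so negative inputs still carry a gradient through a LeakyReLU.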

Scope of the Paper
  1. Previous attempts to scale up GANs using CNNs to model images were unsuccessful.
  2. This paper describes the layers and activations the authors used in their network to successfully model image distributions.
  3. The paper also covers the settings and hyperparameters the authors used, and their effects on the results.

Model Architecture
  1. A uniform noise distribution z is fed into the first layer of the generator, which can be a fully connected layer.
  2. This layer is used for matrix multiplication, and the result is reshaped into a 4-dimensional tensor.
  3. For the discriminator, the last convolutional layer is flattened and then fed into a single sigmoid output.
  4. Pooling layers are replaced with strided convolutions in the discriminator, and with fractionally-strided convolutions in the generator.
  5. Batch Normalization is used to help prevent mode collapse.
  6. Batch Normalization is not applied to the output layer of the generator or the input layer of the discriminator, as doing so may lead to sample oscillation and model instability.
  7. The generator uses ReLU activations for all layers except the output, which uses Tanh.
  8. The discriminator uses LeakyReLU activations for all layers.
  9. Dropout was used to decrease the likelihood of memorization.
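The spatial shapes implied by this architecture can be traced with the standard convolution size formulas. The sketch below assumes a 4x4 kernel with stride 2 and padding 1 (a common implementation choice for DCGAN, not a value quoted from the paper) and a generator that starts from a reshaped 4x4 feature map and upsamples to a 64x64 image:

```python
def tconv_out(size, kernel=4, stride=2, pad=1):
    # Output size of a fractionally-strided (transposed) convolution,
    # which the generator uses in place of pooling/upsampling layers.
    return (size - 1) * stride - 2 * pad + kernel

def conv_out(size, kernel=4, stride=2, pad=1):
    # Output size of a strided convolution, which the discriminator
    # uses in place of pooling layers.
    return (size + 2 * pad - kernel) // stride + 1

# Generator: projected noise reshaped to 4x4, then upsampled 4 times.
gen = [4]
for _ in range(4):
    gen.append(tconv_out(gen[-1]))   # 4 -> 8 -> 16 -> 32 -> 64

# Discriminator mirrors it, downsampling the 64x64 input.
disc = [64]
for _ in range(4):
    disc.append(conv_out(disc[-1]))  # 64 -> 32 -> 16 -> 8 -> 4
```

With these settings each fractionally-strided convolution exactly doubles the spatial size and each strided convolution halves it, which is why the two networks form mirror images of each other.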

Training Details
  1. DCGANs were trained on three datasets: Large-scale Scene Understanding (LSUN), Imagenet-1k, and a newly assembled Faces dataset.
  2. All images were scaled to [-1, 1], the range of the tanh activation function.
  3. Mini-batch Stochastic Gradient Descent (SGD) was used for training, with a minibatch size of 128.
  4. Weights were initialized from a zero-centered Normal distribution with a standard deviation of 0.02.
  5. The slope of the leak was set to 0.2 in LeakyReLU.
  6. The Adam optimizer was used to accelerate the training process.
  7. The learning rate was set to 0.0002, and the momentum term β1 was set to 0.5 to stabilize training.
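The preprocessing and initialization steps above are simple enough to sketch directly (pure Python; `init_weight` draws one scalar weight, whereas a real framework would fill whole tensors):

```python
import random

# Hyperparameters reported in the paper.
LEARNING_RATE = 0.0002
BETA1 = 0.5          # Adam momentum term
BATCH_SIZE = 128

def init_weight():
    # Zero-centered Normal distribution, standard deviation 0.02.
    return random.gauss(0.0, 0.02)

def scale_to_tanh_range(pixel):
    # Map an 8-bit pixel value in [0, 255] into tanh's range [-1, 1].
    return pixel / 127.5 - 1.0
```

Scaling inputs to the generator's output range matters: the discriminator sees real images and generated (tanh-bounded) images on the same scale, so neither can be distinguished by their value range alone.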

Area of Applications

  1. Generation of higher-resolution images.
  2. Vector arithmetic can be performed on images in Z space to get results like: man with glasses - man without glasses + woman without glasses = woman with glasses.
  3. This vector arithmetic could decrease the amount of data needed for modelling complex image distributions.

Source: Deep Learning on Medium