Source: Deep Learning on Medium
Ever since its inception, the GAN has been the talk of the town among machine learning practitioners and researchers. It has been described as the algorithmic idea that will drive future innovation and growth in deep learning.
In statistical learning, i.e. the “learning from data” approach, we deal with two types of models:
- Discriminative Model
- Generative Model
Classification and regression are the two prime examples of discriminative tasks: given a training set, the model's job is to make categorical or continuous predictions, i.e. to estimate E(y|x). With a generative model, on the other hand, we want to generate new samples whose distribution matches that of the training data. In a traditional statistical sense a GAN could be labelled a two-sample hypothesis test, but in practice a GAN is more constructive and creative in its approach.
The GAN is the brainchild of the prominent deep learning scientist Ian Goodfellow and his research colleagues. In Ian’s words:
A framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. This framework corresponds to a two-player minimax game.
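The two-player minimax game from the original paper can be written as a single objective, which D tries to maximize and G tries to minimize (notation as in Goodfellow et al., 2014):

```
\min_G \max_D V(D, G) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```

Intuitively, D pushes D(x) toward 1 on real data and D(G(z)) toward 0 on fakes, while G pushes D(G(z)) toward 1.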
The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles.
That’s what a GAN is: a forger network and an expert network, each being trained to best the other.
How to train your DCGAN (Deep Convolutional GAN)
For each epoch, you do the following:
1. Draw random points in the latent space (random noise).
2. Generate images with the generator using this random noise.
3. Mix the generated images with real ones.
4. Train the discriminator using these mixed images, with corresponding targets: either “real” (for the real images) or “fake” (for the generated images).
5. Draw new random points in the latent space.
6. Train the GAN using these random vectors, with targets that all say “these are real images.” This updates the weights of the generator toward getting the discriminator to predict “these are real images” for generated images: it trains the generator to fool the discriminator.
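The six steps above can be sketched in miniature with a toy 1-D GAN in plain NumPy (illustrative only, not the DCGAN itself): the “images” are scalars drawn from N(4, 1), the generator is an affine map G(z) = a·z + c, and the discriminator is logistic regression. All parameter names and hyperparameters here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # clip to keep exp() from overflowing
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60, 60)))

w, b = 0.1, 0.0          # discriminator params: D(x) = sigmoid(w*x + b)
a, c = 1.0, 0.0          # generator params:     G(z) = a*z + c
lr, batch = 0.05, 64

for step in range(2000):
    # 1-3. sample noise, generate fakes, mix with real samples
    z = rng.standard_normal(batch)
    fake = a * z + c
    real = 4.0 + rng.standard_normal(batch)
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(batch), np.zeros(batch)])

    # 4. train D with cross-entropy (gradient of BCE w.r.t. logit is D - y)
    d = sigmoid(w * x + b)
    g_logit = (d - y) / len(x)
    w -= lr * np.sum(g_logit * x)
    b -= lr * np.sum(g_logit)

    # 5-6. fresh noise; train G against targets "real" (y = 1), D frozen
    z = rng.standard_normal(batch)
    fake = a * z + c
    d = sigmoid(w * fake + b)
    g_logit = (d - 1.0) / batch        # BCE gradient with target 1
    # chain rule through the frozen discriminator: dlogit/dfake = w
    a -= lr * np.sum(g_logit * w * z)
    c -= lr * np.sum(g_logit * w)

# after training, the generator's offset c has drifted toward the real mean
print(c)
```

A real DCGAN replaces the affine map and the logistic regression with convolutional networks, but the alternating update structure is exactly the same.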
Let’s get our hands dirty with some code examples. Here, we are going to generate a collection of images that replicates the FMNIST (Fashion-MNIST) dataset.
- The discriminator is a binary classifier.
- A cross-entropy loss function is used to train D.
- The generator draws a random parameter z, aka the latent variable.
- The generator applies the function G to obtain G(z).
- G strives to fool D so that D(G(z)) = 1.
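The two cross-entropy losses implied by these bullets can be made concrete with a small NumPy sketch (the probability values below are invented for illustration): the discriminator scores real samples against target 1 and fakes against target 0, while the generator scores the very same fake predictions against target 1.

```python
import numpy as np

def bce(p, y):
    # binary cross-entropy, averaged over the batch
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# Suppose the discriminator assigns these probabilities of "real":
d_real = np.array([0.9, 0.8, 0.95])   # on genuine samples
d_fake = np.array([0.2, 0.1, 0.3])    # on generated samples G(z)

# Discriminator loss: real -> target 1, fake -> target 0
d_loss = bce(d_real, np.ones(3)) + bce(d_fake, np.zeros(3))

# Generator loss: G wants D(G(z)) = 1, so fakes are scored against target 1
g_loss = bce(d_fake, np.ones(3))
```

As G improves, d_fake climbs toward 1, g_loss falls toward 0, and d_loss rises, which is exactly the adversarial tug-of-war described above.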