Before we start, I just want the reader to know that this article aims at two things:
- Getting started with Google Colab (and its free K80 GPUs 😛)
- Training a Generative Adversarial Network (it's all right even if you are new to or unfamiliar with Deep Learning; just enjoy the outputs… oh boy 😈)
Now that you know what you are in for, let's dive straight in:
What is Google Colaboratory?
If you don't have a decent GPU or CPU in your PC, Colaboratory is the best thing out there for you right now.
Colaboratory is a free Jupyter notebook environment by Google that requires no setup and runs entirely in the cloud. Often abbreviated as "Colab", it is completely free to use and comes with many common Python libraries and packages pre-installed. It also offers a terminal-style interface within the notebook itself, from which you can install any other library via pip.
Colab is backed by the mighty K80 GPUs and also supports local runtimes, which makes it a compelling free alternative to paid cloud platforms like Paperspace or AWS.
So go grab your Google Colab notebook right now ! 😸
What are 'Generative Adversarial Networks'?
Generative Adversarial Networks (GANs) are the next big thing in Deep Learning. As the name suggests, a GAN is a combination of deep learning models that compete with each other to fulfil one of the most basic needs of any Deep Learning project: GENERATING MORE DATA TO TRAIN ON 😛.
Here, we will train a GAN which will consist of two models :
- The Generator (which will try to forge new data from the existing data, and will ultimately try to fool the detective into believing it is real)
- The Discriminator (this will be our detective 😃, trying to catch the forged data produced by the Generator)
Competing against each other, these two models push one another to produce the most convincing new, unseen data possible.
If you are more into Deep Learning and need more details about what a GAN is, you can follow this article later:
Let's train a GAN ourselves!
From here onward, our discussion will follow the contents of my Colab notebook, which can be found here:
This notebook already has all the code implemented and has been run once. You can either start from scratch with your own notebook or simply run the cells from this notebook step by step.
#STEP 1:
Google Colab doesn't come with some of this project's dependencies pre-installed, so we will install them via pip using the terminal interface in Colab. (Run the first three cells.)
Just to make sure that you're actually getting GPU support from Colab, run the last cell in the notebook (which reads torch.cuda.is_available()). If it returns False, change the runtime settings from the top menu (Runtime → Change runtime type → GPU).
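The check itself is a one-liner; a minimal sketch of that last cell:

```python
import torch

# True when a CUDA-capable GPU runtime (e.g. Colab's K80) is attached.
gpu_available = torch.cuda.is_available()
print(gpu_available)
```

If this prints False even after switching the runtime type, reconnecting the runtime usually picks up the new setting.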
Next, let's import all the required libraries and create a logger class that will help us monitor our training progress. (This is not essential, and it's okay if you don't completely follow what the logger class does.)
The dataset we will use here is the MNIST handwritten-digit image dataset. So let's create the Dataset class, download the dataset, and initialize the DataLoader that will feed batches of data to the models iteratively.
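A minimal sketch of this step is below. The real notebook fetches MNIST via torchvision (shown in the comments); here a random stand-in tensor of the same shape is used instead so the snippet runs without a download, and names like `data_loader` and the batch size of 100 are illustrative assumptions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# In the notebook the real dataset is fetched roughly like this
# (commented out here to avoid the download):
# from torchvision import datasets, transforms
# transform = transforms.Compose([
#     transforms.ToTensor(),
#     transforms.Normalize((0.5,), (0.5,)),  # scale pixels to [-1, 1]
# ])
# data = datasets.MNIST(root='./data', train=True,
#                       download=True, transform=transform)

# Stand-in data with MNIST's shape: 256 grayscale 1x28x28 "images".
data = TensorDataset(torch.randn(256, 1, 28, 28))

# The DataLoader feeds the models one shuffled mini-batch at a time.
data_loader = DataLoader(data, batch_size=100, shuffle=True)
num_batches = len(data_loader)
```

Normalizing to [-1, 1] matters because the generator's Tanh output lives in that same range.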
Now, let's create our Generator and Discriminator models. These will be 3-layer-deep neural networks built with nn.Sequential, with a Linear → LeakyReLU pair in each hidden layer.
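A sketch of the two networks, assuming a 100-dimensional noise input and flattened 28×28 (784-pixel) images; the hidden-layer widths (256/512/1024) are typical choices for this tutorial setup, not necessarily the notebook's exact values:

```python
import torch.nn as nn

# Generator: 100-dim noise vector -> flattened 28x28 image.
# Tanh keeps outputs in [-1, 1], matching the normalized data.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 784), nn.Tanh(),
)

# Discriminator: flattened image -> one "realness" probability.
discriminator = nn.Sequential(
    nn.Linear(784, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
```

Note the mirror-image structure: the discriminator funnels 784 values down to a single probability, while the generator expands 100 noise values up to 784 pixels.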
Next, let's create the noise sampler (whose output we will feed to the generator), initialize the optimizers, and initialize the loss function (we use BCE loss for the min-max adversarial game between the generator and the discriminator). We will also define helper functions for labeling real and fake images.
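These pieces can be sketched as follows. The helper names (`noise`, `ones_target`, `zeros_target`) and the learning rate of 0.0002 are assumptions in the spirit of the notebook; the tiny single-layer stand-in models exist only so the optimizers have parameters to manage:

```python
import torch
import torch.nn as nn
from torch import optim

def noise(size):
    # Sample a batch of 100-dim latent vectors for the generator.
    return torch.randn(size, 100)

def ones_target(size):
    # Target labels for REAL images.
    return torch.ones(size, 1)

def zeros_target(size):
    # Target labels for FAKE images.
    return torch.zeros(size, 1)

# Stand-ins; in the notebook these are the deeper models built earlier.
generator = nn.Linear(100, 784)
discriminator = nn.Linear(784, 1)

g_optimizer = optim.Adam(generator.parameters(), lr=0.0002)
d_optimizer = optim.Adam(discriminator.parameters(), lr=0.0002)

# Binary cross-entropy drives the min-max game between the two networks.
loss = nn.BCELoss()
```

Using separate optimizers matters: each training step should update only one of the two competing networks.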
Then, let’s define the training functions for the Generator and the Discriminator models.
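A self-contained sketch of what those two training functions look like in a vanilla GAN; the tiny one-layer networks here are stand-ins for the deeper models built earlier, and the function names mirror common tutorial code rather than the notebook verbatim:

```python
import torch
import torch.nn as nn
from torch import optim

# Tiny stand-in networks so this snippet runs on its own.
generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())
g_optimizer = optim.Adam(generator.parameters(), lr=0.0002)
d_optimizer = optim.Adam(discriminator.parameters(), lr=0.0002)
loss = nn.BCELoss()

def train_discriminator(real_data, fake_data):
    d_optimizer.zero_grad()
    # 1. Push predictions on real images toward 1.
    error_real = loss(discriminator(real_data),
                      torch.ones(real_data.size(0), 1))
    error_real.backward()
    # 2. Push predictions on fake images toward 0.
    error_fake = loss(discriminator(fake_data),
                      torch.zeros(fake_data.size(0), 1))
    error_fake.backward()
    d_optimizer.step()
    return error_real + error_fake

def train_generator(fake_data):
    g_optimizer.zero_grad()
    # The generator "wins" when the discriminator labels its fakes as real.
    error = loss(discriminator(fake_data),
                 torch.ones(fake_data.size(0), 1))
    error.backward()
    g_optimizer.step()
    return error

# One illustrative step on random "images":
real = torch.randn(16, 784)
fake = generator(torch.randn(16, 100)).detach()  # detach: don't train G here
d_loss = train_discriminator(real, fake)
g_loss = train_generator(generator(torch.randn(16, 100)))
```

The `.detach()` in the discriminator step is the key design choice: it stops gradients from flowing back into the generator while the detective is being trained.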
Finally, let’s cross our fingers and start training ! 😃
NOTE: Colab is known to have some issues with PIL modules. In case of any PIL-related errors, just restart the kernel and re-run all the cells using the Runtime options from the top menu.
Here we are supposed to train for 200 epochs, which takes about 1.5–2 hours on Colab. The notebook has already been trained for 139 epochs, and we can clearly see the output it produced in just that time. You can go ahead and train it for the full 200 epochs to get more cleanly written digits.
The output will be pretty much all noisy images initially, but after a few epochs you'll see that the model starts learning the patterns of the handwritten digits: (right to left, progress in learning)
So, what did we actually achieve here?
So far, we have seen how to easily use Google Colaboratory and how to implement a basic GAN using Colaboratory's GPU support. I hope that this little project has left you confident enough to write code, install external libraries, download data, etc. in a Colab notebook.
What was the outcome of hours of training?
As we can see in the output of our training cell, we got brand-new handwritten-digit images by training this GAN. The most important thing to note here is that these newly formed images are NOT COPIED from the actual dataset (as ensured by our detective, the Discriminator).
Rather, they are original images generated by the Generator model. So these are new images that were NEVER SEEN by our GAN model BEFORE the training. 😏
Is all of this even remotely related to Artificial Intelligence, and what's so special about Generative Models anyway?
To put this in layman's terms, let's look at an example…
Since we have used handwritten-digit data here, consider the case where you want your A.I. to read, identify, and correctly classify human-handwritten numbers as reliably as possible. For that to happen, your model needs to see as many handwritten digits as possible beforehand, in as many different writing styles and hands as possible. So how would you like to procure more of these original handwritten-digit images? By running from person to person, asking them all to write out every digit, and manually collecting thousands of images, OR by letting a GAN generate original images for you in a couple of hours? 😏
As you can imagine, this was a very simple example with the "Hello World" dataset of Deep Learning 😛, and the advantages of Generative Models are magnified further for larger and more complex datasets.
I hope that this was a fun experience for you. If you are yet to start with Deep Learning, I'm sure this read has left you eager and motivated to begin your A.I. journey. And if you haven't used Google Colab yet, I'm sure you'll start using it confidently now! 😃
Happy Coding !
Source: Deep Learning on Medium