Learn to Code in TensorFlow2

Source: Deep Learning on Medium

This article is useful for beginners who are having difficulty getting started with TensorFlow 2. We assume you already know how to write and train a basic deep learning model. Google Colab is recommended for these learning experiments.

We have trained a ResNet18 model in Keras and TensorFlow 2 on the CIFAR10 dataset to reach 90% accuracy in as few epochs as possible.


We have a Keras code base (see the source code link) which we will convert to TensorFlow 2, one step at a time. Please note that this exercise is only meant to get you comfortable with TensorFlow 2 and does not focus much on achieving higher accuracy.


The content has been divided into three parts/blogs, as listed below.

Part 1:

  1. Installing TensorFlow 2
  2. What is eager execution?
  3. Loading the data (CIFAR10)
  4. Custom data augmentation techniques and visualization

Part 2:

  1. Defining a custom model (ResNet18)
  2. Viewing the graph of the defined model with TensorBoard

Part 3:

  1. A brief understanding of GradientTape
  2. Training using custom loops

Installing TensorFlow 2

We will uninstall the existing TensorFlow and install TensorFlow 2.0 (GPU version); a CPU version is also available.

!pip uninstall -y tensorflow
!pip install tensorflow-gpu==2.0.0-alpha0

Check the version; it should print 2.0.0-alpha0:

import tensorflow as tf
print (tf.__version__)

What is Eager execution?

Eager execution: in short, the simple code we will see here is possible because of eager execution. Traditional TensorFlow builds a graph, and we then need to run sessions to do anything. In Keras as well, we compile the model and then start training. With eager execution, we need not do any of that.

It offers an alternative to the inflexible graph system: in TensorFlow 1.x we could simply call tf.enable_eager_execution() at the start of our file and then print the values of tensors as we create them, without having to create a session.
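For instance, under eager execution a minimal sketch like the following runs immediately and prints concrete tensor values, with no graph or session involved:

```python
import tensorflow as tf

# Ops execute as soon as they are called; results are real values.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)
print(b.numpy())  # [[ 7. 10.] [15. 22.]]
```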


In TensorFlow 2, eager execution is enabled by default, which we can confirm:

tf.executing_eagerly ()
>>> True

Loading of data (CIFAR10)

Data loading is very simple; it works the same way as in Keras. You mention the dataset name and load it:

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data ()
len_train, len_test = len (x_train), len (x_test)
y_train = y_train.astype ('int64').reshape (len_train)
y_test = y_test.astype ('int64').reshape (len_test)
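To illustrate the label reshape above with toy values (the small array here is just for demonstration): CIFAR10 labels load with shape (N, 1), and the reshape flattens them to shape (N,), which is what the loss function will expect later.

```python
import numpy as np

# Labels arrive as a column vector of shape (N, 1); flatten to (N,).
y = np.array([[3], [8], [0]], dtype=np.uint8)
y_flat = y.astype('int64').reshape(len(y))
print(y.shape, '->', y_flat.shape)  # (3, 1) -> (3,)
```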

Custom Data Augmentation techniques and visualization

Since the dataset is limited, we use data augmentation techniques to present different variations of the same image so that the model can generalize better. Techniques used:

  1. Normalize the images.
  2. Randomly flip the image horizontally with 50% probability.
  3. Pad the images by 4 pixels on each side to make it 40×40 and then take a random crop of 32×32 making the position of object uncertain.
  4. Take a random cutout from the image (i.e. cover a square patch of the image with the image mean).

This combination of augmentations ensures enough image variation so that the model can generalize better. The example below illustrates this:
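The augmentation steps above can be sketched with TF2 image ops as follows. This is a minimal illustration, not the post's exact code: the `augment` function name and the 8×8 cutout size are assumptions made here for demonstration.

```python
import tensorflow as tf

def augment(image):
    # image: float32 tensor of shape (32, 32, 3), already normalized (step 1)
    # Step 2: random horizontal flip with 50% probability
    image = tf.image.random_flip_left_right(image)
    # Step 3: pad 4 pixels on each side (32 -> 40), then random 32x32 crop
    image = tf.pad(image, [[4, 4], [4, 4], [0, 0]])
    image = tf.image.random_crop(image, [32, 32, 3])
    # Step 4: random square cutout filled with the image mean
    # (8x8 patch size is an illustrative choice)
    size = 8
    x = tf.random.uniform([], 0, 32 - size, dtype=tf.int32)
    y = tf.random.uniform([], 0, 32 - size, dtype=tf.int32)
    # Mask is 0 inside the cutout square and 1 elsewhere
    mask = tf.pad(tf.zeros([size, size]),
                  [[y, 32 - size - y], [x, 32 - size - x]],
                  constant_values=1.0)
    mask = mask[:, :, tf.newaxis]
    return image * mask + tf.reduce_mean(image) * (1.0 - mask)
```

Applied per image (e.g. via `dataset.map`), this gives each epoch a slightly different view of every training example.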