Tutorial: A quick overview of TensorFlow 2.0

Original article was published by Tejas Sathe on Deep Learning on Medium




First, let me tell you: this is just an overview of TensorFlow 2.0. I don’t know about you, but I was very confused when I started to learn it. Many blogs and tutorials start directly with the Functional API or the Sequential API (of the Keras framework), and I was like “What is this?”, “What’s the connection between them?”, “What should I learn first?” and all that.

But don’t worry, I will clear up all your confusion before you even have it. And, huh, I’m so sorry, there is a third one too, i.e. Subclassing.

Don’t panic! It’s not rocket science or anything!

So let’s get started!


What is TensorFlow?

TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and it is also used for machine learning applications such as neural networks. It is used for both research and production at Google.

Wait a minute. It’s getting very theoretical now, okay? And yes, the paragraph above is copied from Wikipedia.

But you know, this technical knowledge is also important. So I will stick to it.

On the more technical side, TensorFlow allows you to do computations on your PC/Mac (CPU & GPU), Android, iOS, and lots more places. Of course, being created by Google, it aims to bring massive parallelism to your backpropagation musings. The main abstraction behind all the magic is stateful dataflow graphs.
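By the way, if you want to check which devices TensorFlow can actually see on your machine, here is a quick sketch (just an illustration, not something from the original article):

import tensorflow as tf

# list the CPUs/GPUs TensorFlow has detected on this machine
print(tf.config.experimental.list_physical_devices('CPU'))
print(tf.config.experimental.list_physical_devices('GPU'))  # empty list if no GPU is available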

Your data flowing through a graph in TensorFlow

The graph above shows data flowing through TensorFlow. Although this is the basic mechanism behind TensorFlow, it is now abstracted away in TensorFlow 2.0. You have eager execution handy now.

Wait a minute! What’s eager execution?

TensorFlow’s eager execution is an imperative programming environment that evaluates operations immediately (in short, it allows us to execute operations in a Pythonic way), without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well.

In TensorFlow 2.0, eager execution is enabled by default. Now you can run TensorFlow operations and the results will be returned immediately.
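For example, here is a minimal sketch (a toy example of my own) of eager execution in action:

import tensorflow as tf

# operations run immediately and return concrete values; no graph or session needed
a = tf.constant([[1, 2], [3, 4]])
b = tf.add(a, 1)               # element-wise addition, evaluated right away
print(b)                       # a tf.Tensor holding [[2 3] [4 5]]
print(tf.executing_eagerly())  # True in TensorFlow 2.0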

Run the above block in a notebook.

First, tell me, do you know about NumPy? Right!

NumPy is a Python library used for working with arrays. It also has functions for working in the domains of linear algebra, Fourier transforms, and matrices.

Keep this in mind: TensorFlow also has its own NumPy-like array to work with. It’s kinda similar to a NumPy array, but maybe more efficient. It’s called the Tensor.

What is Tensor?

A Tensor is a typed multi-dimensional array. For example, a 4-D array of floating-point numbers representing a mini-batch of images with dimensions [batch, height, width, channel].
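To make that concrete, here is a tiny sketch (just an illustration; operations come later) of creating a Tensor and converting between it and a NumPy array:

import numpy as np
import tensorflow as tf

# a 4-D tensor: a mini-batch of 8 images, each 28 x 28 pixels with 1 channel
images = tf.zeros([8, 28, 28, 1], dtype=tf.float32)
print(images.shape)   # (8, 28, 28, 1)
print(images.dtype)   # float32

# Tensors and NumPy arrays convert back and forth easily
arr = np.ones((2, 3))
t = tf.convert_to_tensor(arr)   # NumPy array -> Tensor
back = t.numpy()                # Tensor -> NumPy array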

We will see basic operations on the Tensor in the next blog, since this is just an introductory one. If you are interested, you can go and check here.

Hey, come on! Come to the point!

OKAY!

Let’s first install TensorFlow 2.0.

Go here for a detailed explanation of how to install it on Mac/Windows/Ubuntu.

My preferred way to run TensorFlow 2.0 is Google Colab. It gives us CPU/GPU/TPU support. In Google Colab, just import TensorFlow directly and it will give you the latest version.

import tensorflow as tf
# you can check version with this command
print(tf.__version__)

1. For beginners

The best place to start is with the user-friendly Sequential API. You can create models by plugging together building blocks. Run the “Hello World” example below, then visit the tutorials to learn more.

import tensorflow as tf

# load the MNIST dataset and scale pixel values to the range [0, 1]
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# stack the layers one after another
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

What’s Sequential API then?

It’s just arranging Keras layers on top of each other. As you can see in the program, first we defined a Sequential model from tf.keras, and with its help we can stack layers. The first layer is the Flatten layer, which takes input of shape (28 x 28) and flattens it to shape (1 x 784). It then passes the processed input to the next layer, a Dense layer with 128 units and ReLU activation, followed by a Dropout layer that randomly drops 20% of the units during training to reduce overfitting. The last Dense layer outputs a result of shape (1 x 10). You can learn more here and here.
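If you want to see those shapes for yourself, you can ask the model directly (assuming you have run the snippet above):

# prints each layer with its output shape:
# Flatten -> (None, 784), Dense -> (None, 128), Dropout -> (None, 128), Dense -> (None, 10)
model.summary()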

In easy terms, the Sequential API means “building a wooden house with premade walls and a roof. We are just putting them together.”

2. For experts

The Subclassing API provides a define-by-run interface for advanced research. Create a class for your model, then write the forward pass imperatively. Easily author custom layers, activations, and training loops. Run the “Hello World” example below, then visit the tutorials to learn more.

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Dense, Flatten

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        # define the layers the model will use
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        # write the forward pass imperatively
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()
loss = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

# one custom training step: record the forward pass, then backpropagate
# (images and labels are assumed to be one batch of training data)
with tf.GradientTape() as tape:
    predictions = model(images)
    loss_value = loss(labels, predictions)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))

What’s Subclassing API then?

It’s just making everything from scratch. We can take the help of the predefined Keras Model and Keras Layer classes; both have a well-defined structure. You can learn more here and here.
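For instance, here is a minimal toy sketch (my own example, not from the official tutorials) of a custom layer built by subclassing tf.keras.layers.Layer:

import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super(MyDense, self).__init__()
        self.units = units

    def build(self, input_shape):
        # create the weights once the input shape is known
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal', trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='zeros', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

layer = MyDense(4)
print(layer(tf.ones([2, 3])).shape)   # (2, 4)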

In easy terms, the Subclassing API means “building a wooden house starting by cutting down the tree, shaping the wood into walls and a roof, and then putting it all together.”

Are we done now?

Nah!

There is one more type left – Functional API

Functional API

The Keras functional API is a way to create models that are more flexible than the tf.keras.Sequential API. The functional API can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs.

The main idea is that a deep learning model is usually a directed acyclic graph (DAG) of layers. So the functional API is a way to build graphs of layers.

from tensorflow import keras
from tensorflow.keras import layers

# start from an Input node (here, a flattened 28 x 28 MNIST image)
inputs = keras.Input(shape=(784,))
dense = layers.Dense(64, activation="relu")
x = dense(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
model.summary()

What’s Functional API then?

It’s just arranging different Keras layers, and you can reuse an already defined layer in different places and different models. You can learn more here and here.
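To see the “reuse” part in action, here is a tiny sketch (continuing with the keras and layers imports from the snippet above) where one Dense layer is shared between two models:

# one layer, shared (same weights) by two different models
shared = layers.Dense(64, activation="relu")

inputs_a = keras.Input(shape=(784,))
inputs_b = keras.Input(shape=(784,))
model_a = keras.Model(inputs_a, layers.Dense(10)(shared(inputs_a)))
model_b = keras.Model(inputs_b, layers.Dense(2)(shared(inputs_b)))
# training either model updates the same shared Dense(64) weights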

In easy terms, the Functional API means “building a two-story house, where the roof of the ground floor is reused as the floor of the story above.”

What we’ve done so far

Most importantly, you now know a bit of TensorFlow, and the different types of APIs in tensorflow.keras. Next up: a detailed explanation of the Sequential API with a real-life example.

References

Getting to Know TensorFlow
GitHub of Tejas Sathe
Wikipedia of TensorFlow