Fast introduction to TensorFlow

Source: Deep Learning on Medium


Many of you have probably heard it many times already: TensorFlow is an open-source software library for dataflow programming across a range of tasks, and it is used especially for neural networks and deep learning. TensorFlow was developed by the Google Brain team.


The Advantages

  1. It provides both Python and C++ APIs, so it is easy to work with. Other libraries (Keras, PyTorch, etc.) also offer Python APIs.
  2. It supports computation on both CPUs and GPUs. This is very important, since deep learning requires many computations and nowadays most of these calculations are performed on the GPU.
  3. It is faster than many of its competitors, thanks to its graph structure.

The data flow graph


Unlike traditional programs, with TensorFlow you need to build a graph to program the network. Thanks to this graph you will be able to build large-scale neural networks and run large computations across multiple CPUs and GPUs. Each node in the graph represents an operation (Add, Softmax, Reshape, etc.), and the edges are multidimensional arrays (tensors). Keep in mind that this graph represents exactly how TensorFlow builds the structure of the program in memory. A TensorFlow program therefore has two steps:

  1. Build the computational graph
  2. Execute the computational graph

Tensors have different ranks:

A single number x is a scalar (rank 0), [x, y, z] is a vector (rank 1), and a nested list like [[a, b], [c, d]] is a matrix (rank 2), and so on and so forth. Most of the time you will probably work with matrices.
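A minimal sketch of the ranks above, assuming TensorFlow 1.x (the session-based API used throughout this article):

```python
import tensorflow as tf  # assumed: TensorFlow 1.x

scalar = tf.constant(3)                  # rank 0: a single number, shape ()
vector = tf.constant([1, 2, 3])          # rank 1: shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])   # rank 2: shape (2, 2)

print(scalar.shape, vector.shape, matrix.shape)
```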

This is all, at least from a very high-level point of view. Now let's look at a first example to better understand what's going on.

Example made with Colab

The first three lines build the graph, but they do not perform any kind of computation until we start the session. tf is the conventional abbreviation for the TensorFlow library. You need to create a tf.Session() only once; every session you open should eventually be closed.

There are three main types of elements in TensorFlow:

  1. Constants (tf.constant(c)). Their values cannot be changed after their definition.
  2. Variables (tf.Variable(x)). Values that can be changed during the execution of the program. If you work with variables, remember one important piece of code that you need to put in before running the graph in a session: init_op = tf.global_variables_initializer().
  3. Placeholders (tf.placeholder()). They are used to feed external data into a TensorFlow graph. A placeholder allows a value to be assigned later, i.e. it reserves a place in memory where we will store a value later on.
How placeholders work

The session must be closed when you have done your work, but if you write the code with a with block, this is no longer needed:

OK, now let's take a look at the computational graph. If we want to see the operations inside our graph, we can write this piece of code:

When you work with TensorBoard, it is useful to give each variable an explicit name. Intuitively, TensorBoard is very useful for getting a better view of the graph, so you can debug it and try to optimize it.

This is basically the main structure of a TensorFlow program. These are very simple examples, but I hope they help you understand the main idea of this software.