From TensorFlow 1.0 to PyTorch & back to TensorFlow 2.0


I started my journey in Machine Learning around 2015, when I was in my late teens. Without any clear picture of the field, I read many articles and watched a ton of YouTube videos, but I still had little idea of what the field was or how it worked. That was around the time Google’s popular Machine Learning library, TensorFlow, was released.

TensorFlow was released in November 2015 as an ‘Open Source Software Library for Machine Intelligence’. As soon as it was released, people jumped on it, and the number of forks on GitHub shot up. But it had a fundamental flaw: the ‘Static Graph’.

Under the hood, TensorFlow used a static graph to manage the flow of data and operations. From a programmer’s perspective, this meant you first had to define the complete architecture of the model, store it in a graph, and then launch that graph inside a ‘session’. While this was a plus for building production models, it lacked the Pythonic, interactive feel of ordinary code.
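To make that concrete, here is a minimal sketch of the 1.x workflow (assuming a TensorFlow 1.x install; the variable names are illustrative): building the graph only produces symbolic tensors, and no value exists until a session runs it.

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Phase 1: build the graph. These calls only add nodes to the
# default graph; no arithmetic happens yet.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
total = tf.add(a, b, name="total")

print(total)  # Tensor("total:0", ...) -- a symbolic handle, not a number

# Phase 2: launch the graph in a session to actually compute values.
with tf.Session() as sess:
    print(sess.run(total, feed_dict={a: 3.0, b: 4.0}))  # 7.0
```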

The community started complaining about this problem, and the TensorFlow team created ‘Eager Execution’ to fix it. Even then, it was not made the default mode. TensorFlow also had so many APIs (a few too many) that it became confusing, with several redundant ways to do the same thing (a dense layer, for instance, could be written with tf.layers, tf.contrib, or tf.keras).
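A minimal sketch of what opting in looked like in later 1.x releases (roughly 1.7 onward, where the top-level switch was available): eager mode had to be enabled explicitly before anything else touched TensorFlow.

```python
import tensorflow as tf  # assumes TensorFlow 1.7+, where eager was opt-in

# Must be called once, before any other TensorFlow operation.
tf.enable_eager_execution()

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.matmul(x, x)
print(y)  # concrete values immediately -- no graph, no Session
```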