Learn deeper with Deep Learning frameworks

There has been a lot of fuss in the air about deep learning: Deep Brain, DeepMind, deep this, deep that. The usage of the word "deep" has increased exponentially, and "deep learning" is the first result Google pops up among its search-bar suggestions. You have probably landed here because you already know what deep learning is all about, and since deep learning is so deep, we don't want to dive straight into the code and get lost; this is where deep learning frameworks come to the rescue. When deciding which framework to use for your applications, always remember the "no free lunch" theorem and know that every framework has its own pros and cons.

At the end of the discussion, just for fun, I have incorporated several comparison articles and benchmark results for the different frameworks, covering running speed, ease of coding, number of forks/stars on GitHub, and so on.

Let us dive into these deep frameworks :

TensorFlow : When talking about deep learning frameworks, this one comes first and foremost on the list. It has been 26 months since its initial release by the Google Brain research team, and it already tops the charts as one of the most popular open source deep learning frameworks. If you look closely at the definition of TensorFlow on its website, it says:

“ TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them”

The actual motivation behind TensorFlow was to create a software library for easy, fast numerical computation using data flow graphs over multidimensional data arrays (called tensors). If you use TensorFlow, you will get a better sense of how tensors work and how they flow through a graph. With raw TensorFlow you have to write a lot of code again and again and reinvent the wheel; that is why, to provide high-level layers of APIs over TensorFlow, they have implemented the tf.contrib module, containing modules like nn (a high-level variant of tf.nn) and keras (a high-level API for backends like TensorFlow and Theano, now officially integrated with TensorFlow; we will talk about Keras later).
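To make the "nodes are operations, edges are tensors" idea concrete, here is a toy, framework-free sketch of a data flow graph in plain Python. The `Node` class and its operation names are my own illustration, not TensorFlow's actual API; the point is the two-phase style TensorFlow popularized: build the graph first, then evaluate it in a separate step, much like `Session.run`.

```python
# Toy data flow graph: nodes are operations, edges carry values ("tensors").
class Node:
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

    def run(self, feed):
        # Placeholders look up their value in the feed; other nodes
        # recursively evaluate their inputs first, then apply their op.
        if self.op == "placeholder":
            return feed[self]
        args = [n.run(feed) for n in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(f"unknown op {self.op}")

# Build the graph once -- no computation happens at this point...
x = Node("placeholder")
y = Node("placeholder")
z = Node("add", (Node("mul", (x, x)), y))   # z = x*x + y

# ...then execute it, analogous to sess.run(z, feed_dict={...}).
print(z.run({x: 3, y: 4}))  # 13
```

The same graph can be re-run with different feeds, which is exactly why the graph/execution split is useful for distributing work across devices.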

Image Source: Twitter

There is something else the Google Brain team has been pushing after TensorFlow, to accelerate deep learning research and compensate for TensorFlow's slowness in certain scenarios: Tensor2Tensor, which is essentially an extensible library for use with TensorFlow.

One more exciting thing to try is visualizing your network, which TensorFlow enables through tools like TensorBoard.

New Update: Introducing TensorFlow Lite and Eager Execution

Summary: TensorFlow supports C++ and Python and allows distributing computation across CPUs and GPUs. TensorFlow also has a research cloud called TFRC, which provides around 1,000 TPUs to accelerate open research. Many great deep learning scientists have cited it as slow, and they prefer layers like Keras, PrettyTensor, and TFLearn as high-level abstractions over TensorFlow's great computational power.

Caffe : Caffe was one of the earliest veteran deep learning frameworks. When you head over to the Caffe site, you will see the URL contains "berkeleyvision", as it was created as part of Berkeley AI Research (BAIR), and it stands out as a leading framework for computer vision applications. Here is where you can see a demo of how Caffe's neural network library can be used to implement state-of-the-art computer vision systems with ease. As the Caffe website claims, and as multiple Google searches about Caffe will lead you to conclude, Caffe is really fast with images because it is built on a C++ codebase. It has Python and MATLAB bindings, but its documentation is really poor. At times you are redirected to a Google Docs page with a long slide deck covering everything except the actual implementation of Caffe nets, so you really have to dig deep to figure out when and where things actually happen. Another painful thing is its installation, which has multiple dependencies. So if you want to get started really quickly with Caffe, you have to find a good blog post that lays out the steps in sequence.

Caffe2 : Facebook, on the other hand, has created a new AI framework called Caffe2, joining hands with NVIDIA as a partner. It is built essentially on top of Caffe and improves it in a variety of directions: distributed computing, extending Caffe2 to non-vision use cases, and bringing Caffe2 to mobile computing. Its docs are neatly and sequentially organized by level (Easy, Intermediate, Advanced), with multiple references to state-of-the-art neural network learning resources if you are a beginner. Most of the documentation is in Python, but if you want more scalability and modularity you may opt for C++ in your production systems.

Summary: If you want pure C++ speed and availability in your systems and your use case is computer vision, don't hesitate to put Caffe in your production systems. Use Caffe2 if you want great documentation, non-vision use cases, distributed computing, or mobile computing (the Caffe2 website has great documentation on integrating Caffe2 into Android/iOS devices).

Keras : “You have just found Keras” are the first few words you will find on the website’s homepage. This is my personal favorite, so expect some biased writing in this section. :)

Keras is actually a high-level abstraction library that works on top of TensorFlow or Theano. Keras is powerful because it is really straightforward to create a deep learning model by stacking multiple layers, and the user doesn't have to do the math behind the layers.

I first came across Keras as part of Udacity's Machine Learning Engineer Nanodegree; it was simple, fluid, and straightforward to understand how to chain every block. I built my capstone project using Keras with TensorFlow as the back end. Define the model you need, add some layers to it, pick from the available losses and optimizers, and you are done; finally, use metrics to check your performance. When you want to get things working really fast, Keras enforces a minimalist approach and gets you started in a flash. Here is a comparison of code written in Keras using TensorFlow versus raw TensorFlow.
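The "define a model, stack layers, run it" workflow that Keras enforces can be mimicked in a few lines of plain Python. This toy `Sequential` class is my own illustration of the idea, not Keras's actual implementation: a model is just an ordered stack of layers, and prediction pushes the input through each layer in turn.

```python
# Toy imitation of the Sequential-model idea: a model is a stack of layers,
# and predict() pushes an input through each layer in order.
class Sequential:
    def __init__(self):
        self.layers = []

    def add(self, layer):
        self.layers.append(layer)

    def predict(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# "Layers" here are plain functions; in Keras they would be Dense, Conv2D, etc.
model = Sequential()
model.add(lambda x: [2 * v for v in x])        # stand-in for a linear layer
model.add(lambda x: [max(0.0, v) for v in x])  # stand-in for a ReLU activation

print(model.predict([-1.0, 2.0]))  # [0.0, 4.0]
```

That stacking pattern is essentially all a beginner needs to hold in their head when reading real Keras code, which is why the library feels so minimal.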

Theano : The story of deep learning frameworks begins with Caffe and Theano. Theano is one of the earliest deep learning libraries, released in 2007 and bundled with SciPy in 2010. I don't see many people using Theano alone in their deep learning implementations, but in my experience, using Theano as the backend with Keras gives faster results than using TensorFlow with Keras. This may be due to its Python interface and its integration with NumPy and SciPy, which make computations on multi-dimensional arrays faster. Like TensorFlow, it is a low-level library for numerical computation and optimization, but it lags behind comparable libraries because of its lack of multi-GPU support.

MXNet : Accepted into the Apache Software Foundation (ASF) Incubator, MXNet is one of the truly open deep learning frameworks. It supports an unusually wide range of languages: Python, Scala, R, Julia, Perl, and C++. Even before being selected for the ASF incubator, MXNet had already been adopted by Amazon as its reference library for deep learning. Now called Apache MXNet, I think there is a lot more to expect from this budding framework. They have built an API called Gluon, which lets you plug and play neural network building blocks, including predefined layers, optimizers, and initializers, just like our beloved Keras. :)

The community is also working in parallel on deep learning tutorials from scratch, as part of a book called The Straight Dope. As much of the work is still in progress, you too can be part of this open deep learning community by contributing documentation and tutorials using MXNet. At the pace this promising framework is growing, I am sure it will have a lot to offer: faster implementations, GPU support, beautiful documentation, awesome learning resources, and at last a truly open deep learning framework.

DL4J : Any Java fans out there? Feel like everyone is talking about deep learning in Python, C++, R, and MATLAB while Java is still hibernating in its object-oriented realm? You might be wrong. Here's to your celebration: "Deeplearning4j" is an open source, distributed deep learning library for the JVM. And not only Java: it can run on any JVM language, like Scala, Clojure, or Kotlin. To my surprise, Deeplearning4j also has a Python API that employs Keras, and it enables developers to import models from other frameworks via Keras.

Coming to the documentation of DL4J: it is comprehensive, beautiful, and awe-inspiring. I only discovered its documentation while writing this article, and I am overwhelmed by this sweet serendipity. It has everything from getting started with deep learning to the types of neural networks and the math behind them; its descriptions of Restricted Boltzmann Machines and deep autoencoders are by far the easiest to understand and most detailed I have seen.

Torch : Does anyone know Lua? Torch is there to light the way when you want to put GPUs first. This framework became popular because it was widely used at Facebook Research and at DeepMind before the latter was acquired by Google. It bases its results on extensive parallelization through the fast scripting language LuaJIT and an underlying C/CUDA implementation.

PyTorch : One might wonder whether there is something common between PyTorch and Torch. Are they just Lua and Python implementations over the same wrapper? The answer seems to be no. PyTorch is basically a tensor library like NumPy, with strong GPU support. The developers also emphasize its use of autograd, a tape-based automatic differentiation library that supports all differentiable tensor operations in Torch. With reverse-mode auto-differentiation you can change the way your network behaves without building it from scratch again. At the core, its CPU and GPU tensor and neural network backends (TH, THC, THNN, THCUNN) are written as independent libraries with a C99 API; for more clarification you can read about it here. Surprisingly, this framework is a tough competitor to TensorFlow and Keras, which have been loved by many, because of its dynamic graph. In September, fast.ai announced a switch from Keras & TensorFlow to PyTorch. Jeremy Howard, founding researcher at fast.ai and former President and Chief Scientist at Kaggle, thinks that PyTorch will be able to stay ahead of the curve.
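To give a feel for what reverse-mode automatic differentiation means, here is a minimal pure-Python sketch. This is my own toy, not PyTorch's implementation: each operation records how gradients flow back to its inputs as it runs, and `backward` then propagates gradients in reverse through that record, so derivatives follow whatever computation actually executed, i.e. a dynamic graph.

```python
# Minimal reverse-mode autodiff: each op records its local derivatives,
# and backward() propagates gradients back through that record.
class Var:
    def __init__(self, value):
        self.value, self.grad, self._backprop = value, 0.0, []

    def __mul__(self, other):
        out = Var(self.value * other.value)
        # d(out)/d(self) = other.value, d(out)/d(other) = self.value
        out._backprop = [(self, other.value), (other, self.value)]
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        out._backprop = [(self, 1.0), (other, 1.0)]
        return out

    def backward(self, seed=1.0):
        self.grad += seed
        for var, local_grad in self._backprop:
            var.backward(seed * local_grad)

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # z = x*y + x, built on the fly as Python executes
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Because the record is created afresh on every run, you can freely use Python control flow (loops, branches) to change the network's behavior between iterations, which is the appeal of PyTorch's define-by-run style.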

CNTK : Microsoft developed an internal deep learning framework called CNTK and officially launched version 2.0 in 2017, after renaming it the Microsoft Cognitive Toolkit. In the published benchmarks it appears to be a very powerful tool for vertical and horizontal scaling. CNTK supports Python, C++, and BrainScript.

ONNX : Apart from these frameworks, if you think you may at some point need to switch to another framework, or if it becomes hard to keep up with the latest developments, ONNX, the Open Neural Network Exchange, comes to the rescue. Announced in September 2017, with v1 released in December, ONNX is an open format for representing deep learning models, which allows users to more easily move models between frameworks. ONNX supported Caffe2, the Microsoft Cognitive Toolkit, MXNet, and PyTorch from the start, but as with other open source projects, the community has already added converters for TensorFlow and Core ML as well.

Comparison and Benchmarks :

1. TF vs CNTK

2. Comparisons:

Comparative Study of Deep Learning Software Frameworks

Benchmarking State-of-the-Art Deep Learning Software Tools

3. Keras vs CNTK vs Theano

4. DyNet: a toolkit for implementing neural network models based on dynamic declaration of network structure.

5. Rankings and stars:

Source: Deep Learning on Medium