Introduction to GPUs and PyTorch

Deep learning is currently gaining popularity. From autonomous vehicles to detecting cancer, deep learning is everywhere, and it has profound applications in various domains.

Data is the fuel for deep learning. These days, enormous amounts of data are being generated in the form of images, sound, video, ECG signals, etc. Deep learning algorithms require large datasets and heavy computation, which in turn demands powerful processors. VGG-19, a CNN (Convolutional Neural Network), has about 144 million parameters to train. GPUs (Graphics Processing Units) come in handy for these computations, and these days powerful GPUs are designed specifically for training deep learning models.
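As a quick sanity check, the parameter count can be computed directly in PyTorch (a minimal sketch, assuming the torchvision package is installed; the model is randomly initialized, so no weights are downloaded):

```python
import torchvision.models as models

vgg19 = models.vgg19()  # randomly initialized, no pretrained weights downloaded
n_params = sum(p.numel() for p in vgg19.parameters())
print(f"VGG-19 parameters: {n_params:,}")  # roughly 144 million
```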

Why GPUs for deep learning, and not CPUs?

CPU vs GPU cores

CPUs have a limited number of cores (up to around 24) and can perform complex computations, while GPUs have thousands of cores designed to handle many small computations at once. A single CPU core has more computing power than a GPU core, which is individually quite limited. CPUs are therefore preferred for sequential processing, while GPUs are preferred for parallel processing.
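Here is a minimal sketch that makes the difference visible: timing one large matrix multiplication, exactly the kind of highly parallel workload GPUs are built for (the exact numbers will vary with your hardware):

```python
import time
import torch

def time_matmul(device: str, size: int = 4096) -> float:
    """Time one large matrix multiplication on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # CUDA kernels launch asynchronously
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the kernel to finish before timing
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```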

NVIDIA demonstration on CPU vs GPU

Since neural network training is easily parallelizable, we use GPUs for it. Training a neural network mainly consists of calculating derivatives and updating weights; each individual operation requires little computation, but it must be repeated across millions of parameters, which makes the workload ideal for parallel hardware. GPUs were originally designed for graphics and gaming. Steinkraus et al. (2005) were the first to implement a two-layer fully connected neural network on a GPU, reporting a three-times speedup over their CPU-based baseline.

Nvidia, a popular GPU manufacturer, created a parallel computing platform and application programming interface called CUDA in 2007. It allows software developers to use a CUDA-enabled graphics processing unit for general-purpose processing, an approach termed GPGPU. CUDA was designed to work with C and C++, and a Python wrapper called PyCUDA was developed later. In 2014, Nvidia released cuDNN (the CUDA Deep Neural Network library), a GPU-accelerated library of primitives for deep neural networks. Since then, many deep learning frameworks such as Caffe, TensorFlow, Theano, PyTorch, MXNet, etc., have been developed.
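In PyTorch, a quick check shows whether CUDA and cuDNN are available on your machine:

```python
import torch

print(torch.cuda.is_available())           # True if a CUDA-enabled GPU is visible
print(torch.backends.cudnn.enabled)        # True if PyTorch was built with cuDNN
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # name of the first GPU
    print(torch.version.cuda)              # CUDA version PyTorch was built against
```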

Deep learning frameworks

Why PyTorch?

Of all these frameworks, TensorFlow from Google Brain and PyTorch from Facebook AI are the most popular. Both are written in C/C++ and expose a Python API.

The advantages of PyTorch are:

  1. PyTorch is more "pythonic": working with PyTorch feels like working with NumPy, but on GPUs.
  2. PyTorch supports dynamic computation graphs (DCG), while TensorFlow uses static computation graphs (SCG).
  3. PyTorch has an autograd feature, where gradients are computed automatically (see the sketch after this list).
  4. PyTorch is often reported to be more efficient and faster than TensorFlow.
  5. PyTorch makes debugging easy: operations execute eagerly, so standard Python debugging tools work out of the box.
  6. PyTorch supports data parallelism and can distribute data across multiple GPUs (a minimal sketch follows after this list).
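As a minimal sketch of points 2 and 3: the graph is built on the fly as operations run, so ordinary Python control flow shapes the computation, and autograd differentiates through it automatically:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x   # the computation graph is recorded dynamically as the ops run
y.backward()         # autograd computes dy/dx automatically
print(x.grad)        # tensor(8.), since dy/dx = 2x + 2 = 8 at x = 3
```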
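And a minimal sketch of point 6 (the model and batch sizes here are hypothetical placeholders): nn.DataParallel replicates the model on each visible GPU and splits every input batch across them:

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                      # toy model for illustration
device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)              # replicate across all visible GPUs
model = model.to(device)

batch = torch.randn(64, 128, device=device)
output = model(batch)                           # the batch is split across the GPUs
print(output.shape)                             # torch.Size([64, 10])
```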

PyTorch is the preferred framework among computer vision researchers. Of course, PyTorch has less community support than TensorFlow, but it is gaining more and more attention in the deep learning community these days. Below is the trend of deep learning frameworks on paperswithcode.com.

Deep learning codes submitted on paperswithcode.com