Hardware support for Deep Learning


Apart from the CPU, what other hardware provides support for deep learning applications on our computers? Let's explore.

GPU or Graphics Processing Unit

GPUs have long been used for gaming in computers (where they are popularly known as graphics cards) because of their ability to handle lots of parallel processing using thousands of cores. They also have high memory bandwidth to handle colossal amounts of data, which is why they are used in the field of deep learning to aid the CPU.

The popular GPU manufacturers are NVIDIA, Intel and AMD. NVIDIA generally outperforms the others for deep learning because it ships dedicated GPUs with broad framework support, rather than graphics co-processors integrated alongside the CPU. Some examples of desktop accelerators by NVIDIA are the GeForce series, the TITAN V, the Quadro GV100, etc.

NVIDIA GPU
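To make this concrete, here is a minimal sketch of how a deep learning framework hands work to the GPU. It assumes PyTorch built with CUDA support, which the article itself does not prescribe; the code simply falls back to the CPU when no compatible GPU is found.

```python
import torch

# Use the GPU when PyTorch was built with CUDA and a card is present,
# otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy matrix multiplication: both tensors live on the chosen device,
# so the GPU's thousands of cores do the work in parallel when available.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

print(f"Computed on: {c.device}")
```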

VPU or Visual Processing Unit

A VPU is like a tiny GPU: a dedicated, highly parallel vector processor known for its low power consumption and its performance on deep learning models for machine vision tasks like object detection, 3D mapping, etc. Its usage ranges from smartphones and automobiles to drones, robots, etc.

The key manufacturers of VPUs are Movidius Inc., Synopsys Inc. and MediaTek. Movidius, an Intel company, has developed two major VPUs: the Myriad X VPU and the Myriad 2 VPU.

Intel Movidius Myriad 2 VPU

NCS or Neural Compute Stick

The NCS is a modular deep learning accelerator in a standard USB 3.0 stick that can be used with a huge range of host machines. It supports both TensorFlow and Caffe and allows anyone to test and deploy AI models locally. However, it can only be used for prediction/inference, not for training models: it is designed specifically for deployable models.

NCSs are used in security cameras, gesture-controlled drones, etc. The most popular NCS is the Intel Movidius Neural Compute Stick.

Intel Neural Compute Stick 2
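As a rough illustration of how the stick is used for inference, below is a minimal sketch assuming an NCS 2 and Intel's OpenVINO Runtime Python API (an older release that still ships the MYRIAD plugin). The model file name and input shape are placeholders, not details from the article.

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO Runtime (2022.x API), an assumption

core = Core()

# Load a network previously converted to OpenVINO's IR format
# ("model.xml" is a placeholder file name).
model = core.read_model("model.xml")

# "MYRIAD" targets the Myriad VPU inside the Neural Compute Stick;
# swap in "CPU" to test the same code without the stick attached.
compiled = core.compile_model(model, "MYRIAD")

# Dummy input with an assumed NCHW shape; a real model dictates its own shape.
frame = np.zeros((1, 3, 224, 224), dtype=np.float32)

# One synchronous inference request executed on the stick.
result = compiled([frame])[compiled.output(0)]
print(result.shape)
```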

TPU or Tensor Processing Unit

The TPU is an AI accelerator: a custom application-specific integrated circuit (ASIC) designed by Google for machine learning applications. Major Google products like Gmail, Photos, Search and the Assistant are powered by TPUs. TPUs are specifically designed to support TensorFlow. A TPU is a co-processor, so it cannot execute code on its own, but it can support the CPU in doing so.

Google's latest TPU architecture is the Edge TPU, a purpose-built ASIC designed to run machine learning models for edge computing, meaning it is much smaller and consumes far less power than the Cloud TPUs. The Edge TPU is capable of 4 trillion operations per second while using just 2 W.

TPU
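Because TPUs are built around TensorFlow, a typical way to target one is through tf.distribute.TPUStrategy. The sketch below assumes an environment with an attached Cloud TPU (for example Google Colab or a Cloud TPU VM); the tiny Keras model is only a placeholder.

```python
import tensorflow as tf

# Connect to the TPU runtime; this only succeeds where a TPU is attached.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across all TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Placeholder model; anything built under the scope runs on the TPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```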