Computer Vision: CPU, GPU Allocation & Cloud Computing

Source: Deep Learning on Medium

Oftentimes, computer enthusiasts throw around terms such as CPU and GPU without fully understanding what these computational devices do or why they matter. So, to start, I would like to clarify a couple of concepts.

CPUs Vs GPUs

A CPU, or central processing unit, performs general-purpose arithmetic and logic operations and acts as the brain of the computer. Most modern CPUs have anywhere from 2 to a couple hundred cores, though consumer chips typically sit at the low end of that range. A GPU, on the other hand, is a graphics processing unit and, as its name suggests, was originally built to process graphics. Generally speaking, GPUs start at a few hundred cores and can have up to several thousand. However, these cores are smaller and individually less powerful.

Photo by Magnus Engø on Unsplash

CPU & GPU Allocation

As noted above, GPUs have a significantly higher number of cores than CPUs. This matters a great deal in the image processing and computer vision world. An image can contain anywhere from a few thousand pixels to several million (a single 1080p frame has over two million), and in most computer vision programs these images or frames are processed on a pixel-by-pixel basis. If only 8 powerful CPU cores had to process millions of pixels, each core would churn through a long queue of pixels one after another, dragging down the performance of the program as a whole. Processing those same pixels with several thousand less powerful cores is far more efficient, because many more cores are working on the task at the same time. While the GPU processes the pixels, the CPU handles any other necessary arithmetic and control logic.
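The division of pixel work across many workers can be sketched on a CPU alone. The snippet below is a minimal illustration, not a GPU benchmark: it applies the same per-pixel operation once serially and once split across several processes, loosely mimicking how a GPU assigns pixels to its many small cores. The `brighten` function and the pixel values are made up for the example.

```python
from concurrent.futures import ProcessPoolExecutor

def brighten(pixel):
    """Per-pixel operation: raise brightness, clamped to the 0-255 range."""
    return min(pixel + 40, 255)

def process_serial(pixels):
    # A single worker handles every pixel in turn, like a lone CPU core.
    return [brighten(p) for p in pixels]

def process_parallel(pixels, workers=4):
    # Split the pixels across several processes, loosely mimicking how a
    # GPU hands each pixel to one of its many small cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        chunk = max(1, len(pixels) // workers)
        return list(pool.map(brighten, pixels, chunksize=chunk))

if __name__ == "__main__":
    image = [100, 200, 250, 30] * 1000  # a flattened 4000-pixel "image"
    # Both strategies produce identical results; only the scheduling differs.
    assert process_serial(image) == process_parallel(image)
```

Because each pixel is independent, the work splits cleanly; this independence is exactly what makes image operations such a good fit for thousands of GPU cores.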

Cloud Computing

Not everyone is blessed with a powerful machine sporting top-of-the-line CPUs and GPUs. However, there exists an alternative: cloud computing. Cloud computing means using remote servers to process your data. There are a few powerful options for cloud computing.

Google Colab: Google Colab is essentially a Jupyter Notebook in the browser. Within Google Colab, you can write programs and run them on a remote CPU or GPU. Additionally, packages can be installed directly within Google Colab, making setup significantly easier. This is extremely helpful for users who do not have the money to spend on other cloud computing platforms such as AWS.
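After switching the Colab runtime type to GPU, it is worth confirming a GPU is actually attached. One portable sketch, using only the standard library, is to check for NVIDIA's `nvidia-smi` tool, which ships with the GPU drivers; a zero exit code is a reasonable sign a CUDA-capable GPU is present. (Frameworks like PyTorch or TensorFlow offer their own checks; this version just avoids assuming either is installed.)

```python
import shutil
import subprocess

def gpu_runtime_available():
    """Return True if the nvidia-smi tool is present and runs cleanly.

    nvidia-smi ships with NVIDIA's drivers, so its presence and a zero
    exit code suggest a CUDA-capable GPU is attached to this runtime.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0

print("GPU runtime detected" if gpu_runtime_available() else "CPU-only runtime")
```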

Amazon Web Services: Amazon Web Services (AWS) is one of the most popular and in-demand cloud computing platforms available. It has a wide variety of services for virtually anything. At the time of writing, using AWS for machine learning is free for the first 250 hours.

Spell.run: Spell.run is a platform I recently came across while searching for cloud computing options. It is a cloud computing platform built specifically for machine learning. It seems to be very versatile, is relatively user-friendly, and has a powerful command-line interface.

A Special Look into TPUs

Most people have heard about CPUs and GPUs at least once in their lifetime. However, the TPU remains foreign to most of the world. A TPU, or tensor processing unit, is a chip designed by Google specifically for machine learning workloads, and it is one of the runtime options when using Google Colab for cloud computing. TPUs differ from CPUs in that they are several times faster at the matrix operations that dominate machine learning. The primary drawback of a TPU is that model accuracy may start to diverge during training, partly because TPUs favor reduced numerical precision. Nevertheless, a TPU can save significant time through its extremely high processing power.
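As with GPUs, it helps to confirm the TPU runtime is actually active after selecting it in Colab. The sketch below assumes the convention of older Colab TPU runtimes, which advertise the TPU through the `COLAB_TPU_ADDR` environment variable; newer runtimes and frameworks have their own discovery mechanisms, so treat this purely as an illustration.

```python
import os

def colab_tpu_address():
    """Return the TPU's gRPC address if the runtime exposes one, else None.

    Assumes the older Colab TPU convention of setting the COLAB_TPU_ADDR
    environment variable; on a CPU or GPU runtime this returns None.
    """
    addr = os.environ.get("COLAB_TPU_ADDR")
    return f"grpc://{addr}" if addr else None

print(colab_tpu_address() or "No TPU runtime detected")
```

Frameworks such as TensorFlow use an address like this to connect their TPU cluster resolver, which is why it is exposed as a gRPC endpoint rather than a local device.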