Original article was published by Abhayparashar31 on Artificial Intelligence on Medium
The world is changing day by day. If, ten years ago, you had said that within a decade computers would recognize objects the way we do, no one would have believed you. The same goes for the deep learning era: if you had gone back to the 1990s and told a developer that, twenty years later, deep learning models could be trained in under ten minutes, they would have been astonished, or simply called you mad. But it is all possible now: it is 2020, and we can train our deep learning models very fast using a GPU.
What is CUDA?
CUDA is a parallel computing platform and application programming interface model created by Nvidia Corporation. It allows software developers and engineers to use a CUDA-enabled graphics processing unit for general-purpose processing — an approach termed GPGPU.
Why Use CUDA?
CUDA creates a path between our computer hardware and our deep learning model. It speeds up compute-intensive applications by harnessing the power of the GPU. When using CUDA, the sequential part of the workload runs on the CPU, which is optimized for single-threaded performance, while the compute-intensive portion of the application runs in parallel across thousands of GPU cores.
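As a minimal sketch of this CPU/GPU split (assuming PyTorch is installed; `torch` is the only dependency, and the sketch falls back to the CPU when no GPU is present): the setup work stays on the CPU, and only the compute-heavy matrix multiply is dispatched to the GPU.

```python
# Sketch, assuming PyTorch: setup runs on the CPU, heavy compute on the GPU.
try:
    import torch
    HAVE_TORCH = True
except ImportError:
    HAVE_TORCH = False  # degrade gracefully if PyTorch is missing

def matmul_device(n=256):
    """Multiply two random n-by-n matrices, preferring the GPU; return the device used."""
    if not HAVE_TORCH:
        return "unavailable (PyTorch not installed)"
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Sequential setup (here: generating the data) stays on the CPU ...
    a = torch.randn(n, n)
    b = torch.randn(n, n)
    # ... while the compute-intensive multiply runs on `device`, in parallel on a GPU.
    result = a.to(device) @ b.to(device)
    return str(result.device)

print("Matrix multiply ran on:", matmul_device())
```

On a machine with a CUDA-capable GPU this reports `cuda:0`; otherwise the same code transparently runs on the CPU.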
The NVIDIA CUDA Toolkit provides a development environment for creating high-performance GPU-accelerated applications, such as computer vision or object detection models. With the help of the CUDA Toolkit, you can develop, optimize, and deploy your applications on GPU-accelerated embedded systems. The toolkit includes GPU-accelerated libraries, debugging and optimization tools, a C/C++ compiler, a runtime library for building and deploying your application, and more.
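A quick, hedged way to confirm the toolkit is present (stdlib only): the CUDA Toolkit ships the `nvcc` compiler, so finding it on your PATH is a reasonable sanity check.

```python
# Check for the CUDA Toolkit by looking for its nvcc compiler (stdlib only).
import shutil
import subprocess

def nvcc_version():
    """Return nvcc's release line if the CUDA Toolkit is installed, else a notice."""
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        return "CUDA Toolkit not found (nvcc is not on PATH)"
    out = subprocess.run([nvcc, "--version"], capture_output=True, text=True)
    return out.stdout.strip().splitlines()[-1]  # the last line names the release

print(nvcc_version())
```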
The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for Deep Neural Networks. It provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
Many deep learning researchers and framework developers worldwide rely on cuDNN for fast model training and fast results. cuDNN allows us to focus on training neural networks and developing software applications rather than spending time on low-level GPU performance tuning. cuDNN accelerates widely used deep learning frameworks, including PyTorch and TensorFlow.
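The primitives listed above can be sketched in PyTorch (assumed installed; the sketch degrades gracefully if it is not): a forward convolution, an activation, and a pooling layer, exactly the routines cuDNN provides tuned GPU implementations for.

```python
# Sketch of cuDNN-accelerated primitives, expressed via PyTorch layers.
try:
    import torch
    import torch.nn as nn
    HAVE_TORCH = True
except ImportError:
    HAVE_TORCH = False

def conv_forward_shape():
    """Run one forward pass through conv -> ReLU -> pool; return the output shape."""
    if not HAVE_TORCH:
        return None
    layer = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # forward convolution
        nn.ReLU(),                                   # activation layer
        nn.MaxPool2d(2),                             # pooling layer
    )
    x = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image
    return tuple(layer(x).shape)

# On a CUDA GPU, torch.backends.cudnn.enabled (True by default) routes these
# layers through cuDNN's tuned kernels; no code changes are needed.
print("Output shape:", conv_forward_shape())
```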
How to Install the CUDA Toolkit on Windows?
To install the CUDA Toolkit on Windows, you just need a CUDA-enabled GPU card and the latest NVIDIA drivers for it.
Now we need to install the latest version of the CUDA Toolkit along with the cuDNN library.
Visit the CUDA download page, select your operating system, then the architecture and version, and choose the local .exe installer.
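After running the downloaded installer, a simple sanity check (a hedged sketch using only the standard library): the driver install provides `nvidia-smi` and the toolkit provides `nvcc`, so both should appear on your PATH once setup succeeds.

```python
# Post-install check: report which CUDA components are visible on PATH.
import shutil

def cuda_setup_status():
    """Return {tool: 'found'|'missing'} for the driver and toolkit binaries."""
    return {tool: ("found" if shutil.which(tool) else "missing")
            for tool in ("nvidia-smi", "nvcc")}

print(cuda_setup_status())
```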