How to verify CUDA and cuDNN installation

Source: Deep Learning on Medium


How do you check whether CUDA and cuDNN are installed correctly?

In 2019, most AI enthusiasts opt to buy an Nvidia graphics card to train their deep learning models.

It is always important to verify your installation so that your deep learning framework can access your GPU for computation and memory.

Open up your command prompt or terminal and enter two simple commands to verify.

nvcc --version

and

nvidia-smi

Take note of your

  1. CUDA version
  2. Nvidia driver version
  3. GPU memory
[Screenshot: output of the nvcc and nvidia-smi commands]
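As a sketch, the CUDA release number can also be pulled out of the `nvcc --version` banner programmatically. The sample string below is an illustrative nvcc banner; on a real machine you would capture the actual output with `subprocess.check_output`:

```python
import re

# Typical banner printed by `nvcc --version` (sample text for
# illustration; capture the real output with subprocess).
sample = (
    "nvcc: NVIDIA (R) Cuda compiler driver\n"
    "Cuda compilation tools, release 10.0, V10.0.130\n"
)

# The release number follows the word "release".
match = re.search(r"release (\d+\.\d+)", sample)
cuda_version = match.group(1) if match else None
print(cuda_version)  # 10.0
```

The same pattern works on the output of `nvidia-smi`, which reports the driver version and GPU memory in its header.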

If you are using your GPU for training models, you will also have installed cuDNN along with CUDA.

Take note that a matching pair of CUDA and cuDNN versions is necessary for tensorflow-gpu to work correctly.
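For example, here are a couple of tested pairings from the TensorFlow 1.x era, taken from TensorFlow's published tested-build configurations (double-check the official table for your exact release):

```python
# Known-good CUDA/cuDNN pairings for some tensorflow-gpu releases,
# per TensorFlow's tested-build configurations (verify for your version).
tested_configs = {
    "tensorflow_gpu-1.13.1": ("CUDA 10.0", "cuDNN 7.4"),
    "tensorflow_gpu-1.12.0": ("CUDA 9.0", "cuDNN 7"),
}

print(tested_configs["tensorflow_gpu-1.13.1"])  # ('CUDA 10.0', 'cuDNN 7.4')
```

If your installed CUDA and cuDNN versions don't match a tested pairing for your TensorFlow release, tensorflow-gpu may fail to load its CUDA libraries at import time.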

[Optional] A few ways to check your cuDNN version.

[Screenshots: two methods of checking the cuDNN version]
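One common method is to grep the version macros out of the cuDNN header (`cudnn.h`, or `cudnn_version.h` in newer releases). The sketch below parses those same macros in Python; the header excerpt is illustrative, and the path in the comment is an assumption that varies by install:

```python
import re

# Excerpt of the version macros as they appear in cudnn.h; on a real
# machine you would read the file itself, e.g. from
# /usr/local/cuda/include/cudnn.h (path is an assumption).
header = """
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 5
"""

parts = dict(re.findall(r"#define CUDNN_(MAJOR|MINOR|PATCHLEVEL) (\d+)", header))
cudnn_version = "{}.{}.{}".format(parts["MAJOR"], parts["MINOR"], parts["PATCHLEVEL"])
print(cudnn_version)  # 7.6.5
```

Another common method is to look at the cuDNN shared library file name under your CUDA lib directory, where the version is encoded in the suffix (e.g. `libcudnn.so.7.6.5`).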

To use your GPU with TensorFlow, you can

conda install tensorflow-gpu

or

pip install tensorflow-gpu

To check whether TensorFlow can access your GPU, run this in a notebook cell or Python program:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
[Screenshot: device list showing the GPU is found]

If TensorFlow can see your GPU, it will automatically direct computation to it.

Happy Modelling! 🙂

For more code + lifestyle + Penang, visit my main blog.

Click here to visit.