Source: Deep Learning on Medium
Tensorflow with GPU installation made easy
Part 1.1 : Installation with tensorflow — GPU : Windows 10
Configuring TensorFlow with GPU support can be one of the biggest roadblocks for a beginner, and at times it gets so frustrating that people drop the idea of a GPU setup and fall back to the CPU version. But with the right approach, configuring CUDA and cuDNN and integrating them with TensorFlow can be really easy; I learnt it the hard way. So I will share the recipe to get import tensorflow working with GPU support on the first go, covering the errors you may come across along the way.
This article is part of the TensorFlow Object Detection API series, so after you have completed the installation of TensorFlow with GPU support, you can go back to Part 1 and continue.
Step 1 : Check that you have the required GPU and that it is recognized by Windows
Press ⊞ Win+R, type cmd and press OK. This launches the Windows command prompt.
In the command prompt, first change directory to NVSMI as shown below, then type nvidia-smi:
C:\Users\BIG1KOR>cd \Program Files\NVIDIA Corporation\NVSMI
C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi
We are going to install with CUDA 10.0 or higher, which requires NVIDIA GPU driver version 418.x or higher, so please verify that first.
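You can read the driver version straight off the nvidia-smi banner by eye, but as a minimal sketch of the check being described (the function name and the exact banner text are my assumptions, not part of nvidia-smi's API), the comparison looks like this:

```python
import re

def driver_meets_minimum(smi_banner: str, minimum=(418, 0)) -> bool:
    """Parse 'Driver Version: NNN.NN' from nvidia-smi's banner and compare
    it against the minimum version required by CUDA 10.x (418.x)."""
    m = re.search(r"Driver Version:\s*(\d+)\.(\d+)", smi_banner)
    if m is None:
        raise ValueError("no 'Driver Version' field found in nvidia-smi output")
    # Tuple comparison handles major/minor correctly, e.g. (441, 22) >= (418, 0).
    return (int(m.group(1)), int(m.group(2))) >= minimum
```

If your version comes out below 418.x, update the GPU driver before continuing.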
Step 2 : Check if you have required Visual Studio ≥ 2015
If you have not used Visual Studio before, you will have to install it. The good news is that it comes free. Please use the links below for VS Community 2015:
Note : I have tried and installed with VS Community 2015, but you can try 2017 and 2019 as well.
Note : After the step shown above, choose the default options if any prompt pops up during installation. A PC reboot may be required after installing Visual Studio.
Step 3 : Let us configure CUDA and cuDNN for tensorflow support
At the time of writing, TensorFlow 1.15 is a good and stable version. For this we require CUDA 10.1 or higher, and the compatible cuDNN version depends on the CUDA version, so once you finalize the CUDA version based on your requirements, the cuDNN version is fixed by default. You can also have two versions of CUDA installed side by side if needed; for example, I use CUDA 9.2 for PyTorch and CUDA 10.1 for TensorFlow.
Step 3.1 : To get the required CUDA and cuDNN versions, check the TensorFlow requirements, then go to the links below and download the packages:
To download the cuDNN package, you will have to sign up as an NVIDIA developer; this is a one-time process as long as you keep the same email ID for future downloads.
Step 3.2 : Once you have downloaded the CUDA installer (for example cuda_win10.exe), launch it, click the default option Express Installation and install. If the graphics card and everything else were properly installed, it will install CUDA straight away, and it will configure itself only with the Visual Studio version present on your system; the other Visual Studio versions will show as not installed.
Step 3.3 : Now we need to integrate cuDNN into CUDA.
A common question: what is the difference between CUDA and cuDNN?
CUDA is NVIDIA’s language/API for programming on the graphics card. cuDNN is a library for deep neural nets built using CUDA: it provides GPU-accelerated implementations of the common operations in deep neural nets. You could use it directly yourself, but TensorFlow has already built abstractions backed by cuDNN. So we need cuDNN as well to use tensorflow-gpu.
- Step 3.3.1 : Unzip the cuDNN .zip file
- Step 3.3.2 : You will find 3 folders in the cuda folder of cuDNN, i.e. bin, include, lib. Also, if you go to the path below on your system,
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
you will find the same 3 folders among many others in v10.1. So now we have to copy the corresponding contents of the cudnn folder into the v10.1 folder under Program Files:
step 3.3.3 : copy cudnn64_7.dll from bin to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\
step 3.3.4 : copy cudnn.h from include to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include\
step 3.3.5 : copy cudnn.lib from lib\x64 to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\lib\x64
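The three copy steps above boil down to a small source-to-destination mapping. As a minimal sketch (the function name is hypothetical, and the destination root assumes the default CUDA 10.1 install location), the mapping can be expressed like this:

```python
import os

# Default CUDA 10.1 install root on Windows (an assumption; adjust if yours differs).
CUDA_ROOT = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1"

def cudnn_copy_plan(cudnn_dir: str, cuda_root: str = CUDA_ROOT) -> dict:
    """Return a {source_file: destination_folder} map for the three cuDNN
    files: the DLL goes to bin, the header to include, the import lib to lib\\x64."""
    files = {
        os.path.join("bin", "cudnn64_7.dll"): "bin",
        os.path.join("include", "cudnn.h"): "include",
        os.path.join("lib", "x64", "cudnn.lib"): os.path.join("lib", "x64"),
    }
    return {
        os.path.join(cudnn_dir, rel): os.path.join(cuda_root, sub)
        for rel, sub in files.items()
    }
```

Each key is a file inside your unzipped cuDNN folder and each value is the matching folder inside the CUDA toolkit tree; performing the three copies by hand in Explorer works just as well.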
Step 3.4 : Checking System Variables
To check whether the system variables are properly set up, press the ⊞ Win key and type environ…. You will get the option Edit the system environment variables. Click on it and check the System variables box; it should look something like this for CUDA 10.1. If CUDA_PATH is not there, press New and create it. Press OK when you are done.
Step 3.5 : Checking environment variables
To check whether the environment variables are properly set up, press the ⊞ Win key and type environ…. You will get the option Edit the system environment variables. In the upper part of that window you will see a box headed User variables for <User_Name>. In that box, click on Path and then Edit. We have to add the following paths to the Path variable:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\CUPTI\libx64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp
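If you later want to double-check that all four directories really made it into Path, the check is just a case-insensitive membership test over the semicolon-separated entries. A minimal sketch (the function name is mine, and the directory list assumes the default CUDA 10.1 root):

```python
import os

# The four CUDA 10.1 directories added to the user Path variable above.
REQUIRED_DIRS = [
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\CUPTI\libx64",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp",
]

def missing_from_path(path_value: str, required=REQUIRED_DIRS):
    """Return the required directories absent from a Windows Path value.
    Entries are ';'-separated; Windows paths compare case-insensitively."""
    entries = {
        p.strip().rstrip("\\").lower()
        for p in path_value.split(";")
        if p.strip()
    }
    return [d for d in required if d.rstrip("\\").lower() not in entries]
```

On your machine you would pass in os.environ["PATH"] (in a fresh prompt, after the reboot); an empty result means all four entries are present.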
Then press OK, and OK again, to close the window. We have successfully added the CUDA directories to our Windows Path.
Note : After the step above, a PC reboot is required, so please restart your PC.
Step 4: Setting up a new environment for tensorflow-GPU in anaconda
I have already explained the reason for setting up a new environment, so in case you need it, please refer to Part 1.
Launch the Anaconda command prompt (Python 3, 64-bit), type the following command and press Enter; type ‘y’ when prompted for permission. I have used the name xyz_gpu; you can change it accordingly.
(base) $ conda create --name xyz_gpu python=3.6
After the new environment is created, you have to activate it with the following command:
(base) $ conda activate xyz_gpu
output :
Now that the new environment is set up, let us install tensorflow-gpu with the following command:
pip install tensorflow==1.15
Right now TensorFlow 1.15 is very stable and familiar, so I am sticking to it.
Once the installation is complete, we need to check that it is working correctly. Inside the xyz_gpu environment, launch python and run the following commands:
import tensorflow as tf
tf.test.is_gpu_available()
Output >> True
If you get the output True, well and good: your TensorFlow is properly and completely configured. In case you get any error about .dll files not being found, please move on to the next step.
Step 5: .dll not found errors
Recently I was facing an error about the following .dll files being missing:
But when I checked
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin , all the above-mentioned files were present with a different ending, i.e. _101.dll. I got to know that the _100.dll files come from CUDA 10.0. So I downloaded those DLL files from another system’s CUDA 10.0 directory and put them in my system’s CUDA 10.1 bin folder (mentioned below), and voilà, it worked. For everyone’s convenience, I am sharing a link to a zip containing those DLLs.
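The diagnosis here is purely a naming mismatch: a TensorFlow build linked against CUDA 10.0 looks for DLLs ending in _100.dll, while CUDA 10.1 ships the same libraries ending in _101.dll. A minimal sketch of that check (the function name is hypothetical; it just compares file names, it does not touch the filesystem):

```python
def cuda10_counterparts(missing_dlls, bin_listing):
    """For each missing CUDA 10.0 DLL (…_100.dll), report whether the
    CUDA 10.1 build of the same library (…_101.dll) is present in the
    bin directory listing."""
    available = set(bin_listing)
    return {
        name: name.replace("_100.dll", "_101.dll") in available
        for name in missing_dlls
        if name.endswith("_100.dll")
    }
```

If the _101.dll counterpart is present for every missing file, you know the CUDA toolkit itself is fine and you only need the CUDA 10.0-named copies of those DLLs, as described above.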
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
After everything is installed correctly and you run the tf.test.is_gpu_available code, you get the output shown in the image below.
Now your TensorFlow is properly set up, so it can easily utilize the power of NVIDIA acceleration for deep learning tasks like the Object Detection API.
Please give me claps if you liked this article.
I work as a Data Scientist in Bangalore, India. My interests lie in solving problem statements related to Computer Vision, Image Processing, Machine Learning and Deep Learning. Feel free to connect with me on LinkedIn.