Setting up TensorFlow-GPU/Keras in Conda on Windows 10

I recently got a new machine with an NVIDIA GTX 1050, which has since made my deep learning projects progress much faster. However, getting TensorFlow up and running on the GPU was a little bumpy, so I wrote this guide to get you set up quicker so you can focus on the more important things.

1. Verify your GPU is supported & update its driver

The first thing to do is verify that you have a CUDA-enabled GPU. You can check the list here to make sure it’s supported. You’ll need a card with a compute capability of 3.0 or higher for TensorFlow.

Once you’ve got that part figured out, head over to the NVIDIA website and download/install the most recent version of your GPU driver. Here’s a link to the GeForce drivers. I found the easiest thing to do was let the website autodetect your GPU.

2. Install Visual Studio

The next thing to do is install Visual Studio because… dependencies. To be honest, I don’t know if you really need this unless you’ll be developing directly in CUDA. However, the CUDA Toolkit installation throws a warning if you don’t have it installed, so I decided to set it up. As a bonus, you can use it to build some of the sample code that ships with the CUDA Toolkit, but more on that later.

You’ll need one of the Visual Studio versions listed in the compatibility table in NVIDIA’s CUDA installation guide for Windows:

http://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html

The easiest option is to download Visual Studio Community 2015, which is free. Here’s a link to a Stack Overflow answer on where to get it. Download the installer and follow the prompts to get it set up on your machine.

3. Get the CUDA Toolkit

Note: TensorFlow 1.5 was just released and includes support for CUDA 9.0 and cuDNN 7. However, I’ve kept this write-up focused on CUDA 8.0 and cuDNN 6. You can follow the same steps for either version.

You’ll first need to install the CUDA Toolkit on your machine. The TensorFlow documentation currently recommends CUDA 8.0, which is available on NVIDIA’s website here. Download both the base installer and the patch. Here are some step-by-step screenshots of what you might see. Make sure to select the Custom installation so you can keep your updated driver version.

A few notes on the installer screens: a warning that your driver is newer than the version bundled with the toolkit is safe to ignore; a warning about Visual Studio appears only if you don’t have a compatible version installed; and in the Custom installation options, select only the CUDA components so your updated driver isn’t overwritten.

The rest of the installation should run smoothly. Once the main installation is done, run the installer for the patch.
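As a quick sanity check at this point, you can open a new command prompt and run nvcc --version; it should report the CUDA compilation tools at release 8.0 (or 9.0, depending on which toolkit you installed).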

4. [Optional] Verify CUDA is communicating with your GPU

Once you’ve got CUDA set up, you can use the deviceQuery application that comes as part of the sample code provided with the installation. Unfortunately, the sample isn’t shipped as a pre-built binary on Windows, so you’ll need to build it using the provided Visual Studio solution file.

There are several solution files located at the following path (for CUDA 8.0):

C:\ProgramData\NVIDIA Corporation\CUDA Samples\v8.0\1_Utilities\deviceQuery

You’ll need to open the solution file corresponding to your installed version of Visual Studio, which is deviceQuery_vs2015 if you’ve been following along.

Visual Studio might complain about some updates when you launch it for the first time, so get those out of the way first. Once that’s done, relaunch Visual Studio, switch to Release mode, and build the binary by clicking Build -> Build Solution in the menu bar. The code will compile with some warnings, but you should get a “succeeded” message at the end.

The built binary will then show up in:

C:\ProgramData\NVIDIA Corporation\CUDA Samples\v8.0\bin\win64\Release

The final step is to open up a command prompt, navigate to the path above, and launch the deviceQuery executable. If CUDA is communicating properly with your GPU, you should see output that ends with the key message: Result=PASS.

5. Set up cuDNN

The next step is to download cuDNN 6.0 for CUDA 8.0 from the NVIDIA website here. You’ll need to sign up for a developer account to be able to download the library. The download comes as a zip file; extract the contents and follow the instructions linked below to copy the files to the correct locations. Note that the linked instructions reference CUDA 9.0 paths, but there are equivalent folder paths for CUDA 8.0.

http://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-windows
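If you’d rather script the copy than do it by hand, here’s a minimal Python sketch of what those instructions boil down to for cuDNN 6 with CUDA 8.0. The paths below are assumptions based on the default CUDA install location and the layout of the cuDNN zip, so double-check them against the linked guide, and run the script from an administrator prompt since it writes into Program Files:

import os
import shutil

# Assumed locations: adjust these to match your machine.
cudnn_dir = r"C:\path\to\extracted\cuda"  # the "cuda" folder extracted from the cuDNN zip
cuda_dir = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0"  # default CUDA 8.0 install path

# Copy the DLL, header and import library into the matching CUDA Toolkit folders.
for rel_path in (r"bin\cudnn64_6.dll", r"include\cudnn.h", r"lib\x64\cudnn.lib"):
    src = os.path.join(cudnn_dir, rel_path)
    dst = os.path.join(cuda_dir, rel_path)
    shutil.copy2(src, dst)
    print("copied", src, "->", dst)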

6. Set up Miniconda

Alright! Almost there, all that’s left now is setting up your conda environment with the right packages and off you go!

Download and install Miniconda from here. If you haven’t used Conda before, now’s a great time to start. Here’s a link to their documentation to give you an overview of what it does. Once the installation is complete, go ahead and add \Miniconda3\Scripts\ to your PATH environment variable so you can get into your environment from a regular command prompt without having to use the conda prompt. This is more of a personal preference.

Once that’s done, execute the following command to set up an environment with some basic data science packages:

conda create -n workspace-gpu python numpy pandas matplotlib jupyter scikit-learn scikit-image scipy h5py

where workspace-gpu is the environment name. Activate it by running activate workspace-gpu from your command prompt, then let’s install OpenCV 3 as well:

conda install -c conda-forge opencv

The TensorFlow documentation recommends using pip to install TensorFlow inside conda environments, so, with your environment active, run the following commands to get TensorFlow and Keras:

pip install tensorflow-gpu==1.4.0

pip install keras

Note that I’m installing version 1.4.0 of TensorFlow, which is compatible with CUDA 8.0 and cuDNN 6. You can omit the version number if you installed CUDA 9/cuDNN 7, which will pull the latest version of TensorFlow. You can then install additional packages in your environment using:

conda install <package-name>

or search for packages using:

conda search <package-name>
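Once TensorFlow and Keras are in place, a quick way to confirm that Keras picked up the TensorFlow backend is a small check like the one below (importing Keras should print “Using TensorFlow backend.”, and keras.backend.backend() reports the name of the active backend):

# Confirm Keras is wired up to the TensorFlow backend.
import keras
print(keras.backend.backend())  # expected output: tensorflow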

7. Make sure everything works

Once you’ve got all your favorite packages installed, enter your environment, launch the python interpreter and run the following:

import tensorflow as tf

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

You should see something along the lines of the screenshot below (sorry for the small font); the important part is that your GPU shows up in the device log.
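If you want a slightly more end-to-end check than just creating a session, here’s a small sketch (using the same TensorFlow 1.x API as above) that runs a matrix multiplication with device placement logging turned on; your GPU should appear in the placement messages as something like /device:GPU:0:

import tensorflow as tf

# Build a tiny graph so TensorFlow has something to place on a device.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name="a")
b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name="b")
c = tf.matmul(a, b, name="matmul")

# log_device_placement=True prints which device each op runs on.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))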

The other thing you can do is monitor the utilization of your GPU while training by running the following from a command prompt:

C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe
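If you’d rather not rerun that by hand, nvidia-smi also accepts a loop interval, for example adding -l 5 to the command above to refresh the readout every five seconds (run it with -h to see the full list of options supported by your driver version).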

and that’s all folks!
