Setting up an Ubuntu 18.04.1 LTS system for deep learning and scientific computing

A tutorial for anyone who wants to set up an Ubuntu 18.04.1 LTS desktop specifically for machine (deep) learning applications and scientific computing. Just the basics. This tutorial assumes you have:

  1. A fresh Ubuntu 18.04.1 LTS install. If not, I’d refer to this useful article.
  2. A GeForce 10-series graphics card.

Installing and updating packages from Canonical

In the Terminal, first update your package database then upgrade the installed packages on your system.

sudo apt-get update
sudo apt-get upgrade

Next, install some general packages used for coding, ML, and scientific applications (some extend beyond the uses detailed below).

sudo apt-get install vim csh flex gfortran libgfortran3 g++ \
cmake xorg-dev patch zlib1g-dev libbz2-dev \
libboost-all-dev openssh-server libcairo2 \
libcairo2-dev libeigen3-dev lsb-core \
lsb-base net-tools network-manager \
git-core git-gui git-doc xclip

Validate G++-7 Compiler and BoostLib are properly functioning

We can run a quick check to see that both the GNU C++ compiler and BoostLib are working as intended. Create a file called ‘test_boost.cpp’ and add the following code:

#include <boost/lambda/lambda.hpp>
#include <iostream>
#include <iterator>
#include <algorithm>

int main()
{
    using namespace boost::lambda;
    typedef std::istream_iterator<int> in;

    std::for_each(
        in(std::cin), in(), std::cout << (_1 * 3) << " " );

    return 0;
}

Compile and execute the program.

g++-7 test_boost.cpp -o test_boost
echo 3 6 9 | ./test_boost

The output should be 9 18 27.

Installing Anaconda 5 (Python 3.6 version)

From the Terminal, grab the most recent Anaconda3 install script with wget, saving it to your Downloads folder, then run the installer and accept the default directory.

wget -O ~/Downloads/
sh ~/Downloads/

Follow the installation instructions:

  • Enter to read through license terms.
  • ‘yes’ to agree to license terms.
  • Enter to accept default installation location (/home/{User}/anaconda3), or specify another directory
  • ‘yes’ to prepend the Anaconda3 install location to your ~/.bashrc file
  • (Optional) ‘yes’ or ‘no’ to the VS code license.

Source the changes in the terminal, update conda, and check your Python version:

source ~/.bashrc
conda update --all
python --version

Generating a public key and setting up GitHub

Here, SSH will be used in one small way: connecting to version-control hosts like GitHub and Bitbucket. First, generate an RSA key pair using a passphrase you’ll remember.

ssh-keygen -t rsa -b 4096

This will generate a private key and a matching public key in the ~/.ssh/ directory. Now, copy your public key to the clipboard.

xclip -sel c < ~/.ssh/

Log in to your GitHub account, go to Settings > SSH and GPG keys, and add a new SSH key.

Adding the public RSA key for your Ubuntu desktop.

Give the key a title, e.g. dl-computer, and paste your public RSA key into the Key field. Add the key. Now you can clone/push/pull/etc. via SSH.

git clone

Installing NVIDIA Drivers

More comprehensive instructions here. Check NVIDIA’s website for the newest driver version and note the number. The commands below first purge any old NVIDIA remnants, then add the personal package archive (PPA) for compiled graphics drivers and update the package database, substituting ‘nvidia-[version]’ where appropriate. Note: 396.54 is the version that should work for this install with GeForce cards, but owing to a bug it has to be installed through the Software & Updates GUI.

sudo apt-get purge nvidia-*
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
# sudo apt-get install nvidia-396 # doesn't currently work

Install NVIDIA 396.54 (GeForce/Titan) through Software & Updates GUI instead of through terminal. Open Software & Updates > Additional Drivers and select “… nvidia-driver-396 (open source)” and apply the changes.

Click the correct package and click ‘Apply Changes’; this may take a while to install. Restart afterwards.

Now restart your computer. Check that you can communicate with your graphics card by typing nvidia-smi into the terminal; it should give you something that looks like this:

NVIDIA System Management Interface — useful for seeing details of GPU usage, especially when using memory-intensive DNN models with millions of parameters.

Install CUDA 9.2

Install dependencies needed for CUDA, then download the CUDA install files.

sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev
wget -O ~/Downloads/
wget -O ~/Downloads/cuda_9.2.148.1_linux

Install the primary CUDA toolkit.

sudo sh ~/Downloads/cuda_9.2.148_396.37_linux

You’ll be prompted with the following:

  • Read license, and when done hit Escape and type ‘accept’.
  • ‘y’ to unsupported configuration.
  • ’n’ to installing the driver (already done above).
  • ‘y’ to installing the toolkit.
  • Enter for installing to the default path.
  • ‘y’ to symbolic links in /usr/local/cuda
  • ‘y’ to installing samples.
  • Path to samples: /usr/local/cuda-9.2

If the NVIDIA driver version is not supported, the installer will display a warning to that effect, stating that the toolkit was not installed correctly. If this happens, make sure your NVIDIA driver was installed properly as detailed above.

Next, install the primary CUDA toolkit patch. This is a quick process.

sudo sh ~/Downloads/cuda_9.2.148.1_linux
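The installer does not update your shell environment, so it’s worth adding the CUDA binaries and libraries to your paths. A minimal sketch, assuming the default install location chosen above:

```shell
# Append CUDA 9.2 paths to ~/.bashrc so nvcc and the CUDA libraries are found
echo 'export PATH=/usr/local/cuda-9.2/bin${PATH:+:${PATH}}' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
source ~/.bashrc
nvcc --version   # should now report release 9.2
```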

Test that CUDA was installed properly by compiling the sample programs.

mkdir ~/cuda-testing
cp -r /usr/local/cuda/samples ~/cuda-testing
cd ~/cuda-testing/samples
make -j4

Check that the number of programs created is >140, and that the compiled programs run successfully.

ls -l bin/x86_64/linux/release/ | wc -l

Successful execution of a CUDA program on the GPU. An unsuccessful run will fail with error messages.

Install cuDNN 7.2.1 for CUDA 9.2

cuDNN is a library for deep learning, incorporating many GPU-enabled primitives that are used with TensorFlow/Pytorch/etc. To get access to the library, you’ll need to register an account with NVIDIA when prompted if you don’t have one already.

After registering, download the cuDNN v7.2.1 Library for Linux package.

Now, install a dependency, extract the zipped tar ball and copy library contents to the CUDA path as follows.

sudo apt-get install libcupti-dev
cd ~/Downloads
tar -zxvf cudnn-9.2-linux-x64-v7.2.1.38.tgz
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-9.2/lib64/
sudo cp cuda/include/cudnn.h /usr/local/cuda-9.2/include/
sudo chmod a+r /usr/local/cuda-9.2/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

Installing TensorFlow and Pytorch environments

Testing GPU-enabled TensorFlow and Pytorch using Anaconda environments.

Create two separate conda environments for TensorFlow-GPU and Pytorch with a few other packages. First, TensorFlow (and Keras)…

conda create -y --name tfgpu python=3.6
source activate tfgpu
conda install -y -q tensorflow-gpu keras

After the environments have been created and installed, we can test the GPU-enabled versions of TensorFlow and Pytorch.

source activate tfgpu
python -c "import tensorflow as tf; print(tf.Session(config=tf.ConfigProto(log_device_placement=True)))"

Here, you should see terminal output indicating that gpu:0 devices were used.

TensorFlow successfully using GPU device (GPU:0) instead of CPU (CPU:0).

Next, for Pytorch.

conda create -y --name pytorch python=3.6
source activate pytorch
conda install -y -q -c pytorch torchvision cuda92 pytorch

Test that Pytorch is using the GPU with the following.

source activate pytorch
python -c "import torch; print(torch.cuda.get_device_name(0))"

This should return the name of the GPU device Pytorch will use.

Installing Docker CE

For dockerizing your builds. Can’t beat the straightforward official documentation.

Source: Deep Learning on Medium