Computer Vision with TensorFlow Part-2

Source: Deep Learning on Medium

Walk through the notebook

The notebook is available here. If you are using a local development environment, download the notebook; if you are using Colab, click the `open in colab` button. We will also work through some exercises in this notebook.

First, we use the above code to import TensorFlow 2.x. If you are using a local development environment, you do not need lines 1–5.

Then, as discussed, we use this code to load the dataset.

```python
fashion_mnist = keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data()
```

We will now use `matplotlib` to view a sample image from the dataset.

```python
import matplotlib.pyplot as plt
plt.imshow(training_images[0])
```

As you might have guessed, you can change the `0` to other values to view other images. You'll notice that all of the pixel values in the image are between 0 and 255. If we are training a neural network, for various reasons it's easier if we scale all values to lie between 0 and 1, a process called 'normalizing'. Fortunately, in Python it's easy to normalize an array like this without looping:

```python
training_images = training_images / 255.0
test_images = test_images / 255.0
```

The next code block in the notebook defines the same neural net we discussed earlier. You might have noticed one change: the last layer uses the `softmax` function. Softmax takes a set of values and turns them into a probability distribution that effectively picks out the biggest one. For example, if the output of the last layer looks like [0.1, 0.1, 0.05, 0.1, 9.5, 0.1, 0.05, 0.05, 0.05], softmax turns it into something very close to [0, 0, 0, 0, 1, 0, 0, 0, 0], so it saves you from fishing through the output looking for the biggest value. The goal is to save a lot of coding!
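To see this concretely, here is a minimal sketch of softmax in NumPy (the `softmax` helper is our own illustration, not code from the notebook), applied to the example output above:

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability, then normalize the exponentials.
    exps = np.exp(x - np.max(x))
    return exps / exps.sum()

logits = np.array([0.1, 0.1, 0.05, 0.1, 9.5, 0.1, 0.05, 0.05, 0.05])
probs = softmax(logits)

print(probs.round(4))         # the entry for 9.5 dominates, close to 1.0
print(int(np.argmax(probs)))  # index of the predicted class
```

The probabilities always sum to 1, and the largest input gets nearly all of the probability mass, which is why the result looks like a one-hot vector.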

We can then try to fit the training images to the training labels. We'll just do it for five epochs to be quick. Training takes about 50 seconds over those five epochs, and we end up with a loss of about 0.205, meaning the model is pretty accurate at guessing the relationship between the images and their labels. That's not great, but considering it was done in just 50 seconds with a very basic neural network, it's not bad either. A better measure of performance comes from trying the test data: images the network has not yet seen. You would expect performance to be somewhat worse, but if it's much worse, you have a problem. As you can see, the test loss is about 0.32, so the model is a little less accurate on the test set. Not great either, but it tells us we're doing something right.

I have some questions and exercises for you, eight in all, and I recommend you work through all of them; you will also explore the same example with more neurons and other variations. Have fun coding!

You just built a complete Fashion MNIST model that can predict images of fashion items with pretty good accuracy.