Simplest approach for Image Classification

Original article can be found here (source): Deep Learning on Medium

https://miro.medium.com/max/1000/1*ATIx1SmkEH0FaL_5fMvX2w.jpeg

Today I am going to show you how to build an image classifier with the simplest deep learning techniques.

Outline

Create a CNN to classify images (from scratch)

Create a CNN to classify images (using transfer learning)

Training Datasets

Having a good training dataset is the biggest reward to begin with in image classification. I am going to use the datasets provided by Udacity as part of its Data Scientist Nanodegree program. Feel free to use your own dataset for better learning. In this dataset, I am going to classify dog breeds.

I have set myself a target accuracy of 80% with training under 2 minutes on a GPU, i.e., the model recognizes the dog breed 8 times out of 10.

Importing the Libraries and datasets

Importing some Libraries and Dog datasets
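The original loading code is only shown as an image; a minimal sketch of how the dog dataset might be loaded with scikit-learn follows. The folder layout and the 133-breed count are assumptions based on the Udacity dog dataset:

```python
import numpy as np
from sklearn.datasets import load_files

def load_dataset(path, num_classes=133):
    """Collect image file paths and one-hot targets from class subfolders."""
    data = load_files(path, load_content=False)
    files = np.array(data['filenames'])
    # one-hot encode the integer class labels
    targets = np.eye(num_classes)[np.array(data['target'])]
    return files, targets

# assumed folder layout: dogImages/train/<breed_name>/<image>.jpg
# train_files, train_targets = load_dataset('dogImages/train')
```

`load_files` with `load_content=False` only lists file paths per class subfolder, which keeps the loading step fast.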

I am also going to import a human dataset so that we can check how well our model is performing.

Importing human datasets.
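The human images can be gathered with a simple glob; the `lfw/*/*` folder pattern is an assumption based on the Udacity setup:

```python
import numpy as np
from glob import glob

def load_human_files(pattern='lfw/*/*'):
    """Collect every human image path matching the folder pattern."""
    return np.array(sorted(glob(pattern)))

# human_files = load_human_files()
```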

I am going to start with the simplest model to detect humans and dogs. One of the simplest approaches is to use OpenCV to detect human faces. After detecting the faces, I am going to draw rectangular boundaries around them.

Image Localisation with OpenCv

As you can see in the above picture, I have imported the necessary libraries. If you look carefully, I am also loading an additional file, the Haar cascades XML file. You can easily find this file on the web.

Now I am going to evaluate our model. First, I am going to write a function that we will use to load images and detect faces.

Function to load Images

Now time to evaluate our model.

Model Evaluation

As we can see from the above images, I have achieved 100% accuracy. But there is a problem with OpenCV: it can only detect forward-facing faces. OpenCV also takes approximately 2 minutes to run on 100 images.

Now it’s time to give a state-of-the-art deep learning model a try. I am going to build a Convolutional Neural Network from scratch.

Pre-processing the Data

We rescale the images by dividing every pixel in every image by 255.

Rescaling Images
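The rescaling code is only shown as an image; a light sketch of the step using PIL and NumPy is below. The 224×224 target size is an assumption matching common CNN input sizes:

```python
import numpy as np
from PIL import Image

def path_to_tensor(img_path, size=(224, 224)):
    """Load an image, resize it, and rescale pixel values from 0-255 to 0-1."""
    img = Image.open(img_path).convert('RGB').resize(size)
    x = np.asarray(img, dtype='float32') / 255.0  # divide every pixel by 255
    return np.expand_dims(x, axis=0)              # add a batch dimension
```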

Defining our model and importing the necessary libraries.

Model Images.
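The architecture itself is only shown as an image; a plausible small from-scratch CNN is sketched below. The layer sizes are my assumptions; only the 133 breed classes come from the dataset:

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D,
                                     GlobalAveragePooling2D, Dense)

model = Sequential([
    Input(shape=(224, 224, 3)),                 # rescaled RGB images
    Conv2D(16, 2, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(32, 2, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 2, padding='same', activation='relu'),
    MaxPooling2D(),
    GlobalAveragePooling2D(),                   # collapse the spatial dimensions
    Dense(133, activation='softmax'),           # one output unit per dog breed
])
model.summary()
```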

Now we are going to check the number of trainable parameters with model.summary().

Total trainable params
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

Starting training of model

Fitting our model
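The fitting code is only shown as an image; it might look like the following sketch. The checkpoint filename, epochs, and batch size are assumptions:

```python
from tensorflow.keras.callbacks import ModelCheckpoint

def train(model, X_train, y_train, X_val, y_val,
          epochs=5, batch_size=20,
          weights_path='weights.best.from_scratch.weights.h5'):
    """Fit the model, saving the best weights seen on the validation data."""
    checkpoint = ModelCheckpoint(weights_path, save_weights_only=True,
                                 save_best_only=True, verbose=1)
    return model.fit(X_train, y_train,
                     validation_data=(X_val, y_val),
                     epochs=epochs, batch_size=batch_size,
                     callbacks=[checkpoint])
```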

After training, the model gives an accuracy of approximately 9% on the validation data, which is far below our targeted accuracy.

accuracy image

I am going to give a shot to another deep learning technique called transfer learning, in which we download a pre-trained model with its weights and architecture. We freeze all the weights and then add our own dense layers to train on our images. The model I am going to use is Xception, which is pre-trained to classify 1000 categories of images.

Loading our model
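The loading step is only shown as an image. One common approach in the Udacity project is to load pre-computed Xception bottleneck features from an .npz file; the file name and array keys below are assumptions:

```python
import numpy as np

def load_bottleneck_features(path='bottleneck_features/DogXceptionData.npz'):
    """Load pre-computed Xception features for the train/valid/test splits."""
    data = np.load(path)
    return data['train'], data['valid'], data['test']

# train_Xception, valid_Xception, test_Xception = load_bottleneck_features()
```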

Implementing the model architecture with our own hidden layers.

Xception net model
Xception_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
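The new classification head is only shown as an image; it can be sketched as below, with the compile call from above included for completeness. The (7, 7, 2048) bottleneck shape is an assumption about Xception's feature-map size for these inputs:

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense

Xception_model = Sequential([
    Input(shape=(7, 7, 2048)),         # Xception bottleneck features
    GlobalAveragePooling2D(),          # pool each feature map to one value
    Dense(133, activation='softmax'),  # one output unit per dog breed
])
Xception_model.compile(loss='categorical_crossentropy',
                       optimizer='rmsprop', metrics=['accuracy'])
```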

Fitting our model and starting the training.

Model training.

As we can see from the above image, our model is learning rapidly, taking approximately 1 second per epoch. I am training for 20 epochs, which takes approximately 20 seconds. Now we are going to check our accuracy on the validation data.
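The accuracy check can be computed from the predicted probabilities and the true one-hot labels; a small pure-NumPy helper (the function name is mine):

```python
import numpy as np

def top1_accuracy(pred_probs, one_hot_targets):
    """Percentage of samples whose highest-probability class matches the label."""
    predictions = np.argmax(pred_probs, axis=1)
    truth = np.argmax(one_hot_targets, axis=1)
    return 100.0 * np.mean(predictions == truth)
```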

Our model is doing exceptionally well with under 20 seconds of training, and we have also achieved our targeted model accuracy at 81.9%.

Reflection

At the start, my objective was to create a CNN with 80% testing accuracy under 2 minutes of training on a GPU. Our final model obtained 81.9% testing accuracy with only 20 seconds of training.

There are still good chances to increase the model accuracy with the following techniques:

Image augmentation, adding more dense layers, and increasing the number of epochs together with dropout to decrease the chances of the model overfitting.
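Image augmentation, for instance, can be added with Keras preprocessing layers; a hedged sketch is below, and the specific transforms are my choice:

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import RandomFlip, RandomRotation

# random horizontal flips and small rotations, applied only during training
augment = Sequential([
    RandomFlip('horizontal'),
    RandomRotation(0.1),  # rotate by up to +/-10% of a full turn
])

batch = np.random.rand(4, 224, 224, 3).astype('float32')
augmented = augment(batch, training=True)  # the batch shape is preserved
```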

By following the above approaches, I’m sure we could increase the testing accuracy of the model to above 95%.