Dog breed image classification using transfer learning



Photo by Austin Kehmeier on Unsplash

Whenever I hear the words “transfer learning”, it reminds me of the picture above, where a big brother lends a hand to pull me up. Transfer learning exemplifies this behavior: a model learns from one task, and that knowledge can be reused for another task with a little fine-tuning.

Traditional ML vs. Transfer Learning

In traditional machine learning, we always start from scratch: we gather a big training dataset, pick a typical deep learning algorithm, and train all the layers of the model. If you have another use case, you start from scratch again, so each model is trained in isolation with no reuse of any kind. This is resource and cost intensive, and the huge dataset each use case requires is not trivial to create.

In the transfer learning scenario, we instead reuse a model that experts trained once on a huge dataset like ImageNet using a convolutional neural network (CNN). Several predefined network architectures are already available:

  • VGGNet: runner-up of the ImageNet challenge (ILSVRC) 2014
  • ResNet: winner of ILSVRC 2015
  • Inception: from Google, winner of ILSVRC 2014

All of these models use CNNs behind the scenes. We reuse such a model, cut off its top layer, freeze all the remaining layers, add our own fully connected layers, and train on the smaller image dataset under consideration. This requires a lot less data, training is much faster, and accuracy is also much higher.

Leveraging an existing network is possible because of how a CNN learns features. At its lowest layers, a CNN learns to detect edges; the next layers learn to detect parts of objects; and the last layers learn to detect entire objects. When you change the training dataset, the lower layers need not change, because they still perform the same edge and object-part detection; only the top layers that do the final classification need to change.

There are two types of transfer learning:

  1. Freezing: here we remove the top layers, freeze the weights of all remaining layers, add new fully connected layers, and retrain on a smaller set of images. We use this technique when data similarity is high and the new dataset is small.

  2. Fine tuning: here we freeze the first few layers and fine-tune the remaining layers on the new dataset. We use this technique when data similarity is low and the new dataset is small. A minimal sketch of both approaches follows this list.
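Here is that sketch in Keras (TensorFlow 2.x). The cut point of 249 layers in the fine-tuning variant is only an illustrative assumption, not a rule from this article:

```python
# A minimal sketch of both transfer learning styles with Keras.
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False)

# 1. Freezing: no pretrained layer is updated during training.
for layer in base.layers:
    layer.trainable = False

# 2. Fine tuning: keep the early layers frozen, let the later ones adapt.
#    The cut point (249) is illustrative, not a fixed rule.
for layer in base.layers[:249]:
    layer.trainable = False
for layer in base.layers[249:]:
    layer.trainable = True
```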

Use Case:

We are going to use the Inception V3 model and fine-tune it for classifying dog breeds.

Here is our goal: given an image of a dog, we would like to predict its breed.

The training data we use in this case is the Stanford dog breed dataset. It contains 10,222 images of dogs classified into 120 breeds.
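As a hedged sketch of the input pipeline, assuming the images have been unpacked into one sub-folder per breed under hypothetical `data/train/` and `data/valid/` directories:

```python
# Input pipeline sketch: one sub-folder per breed (assumed layout).
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.inception_v3 import preprocess_input

train_gen = ImageDataGenerator(preprocessing_function=preprocess_input,
                               horizontal_flip=True)   # light augmentation
valid_gen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_data = train_gen.flow_from_directory(
    "data/train",                   # hypothetical path
    target_size=(299, 299),         # Inception V3's expected input size
    batch_size=32,
    class_mode="categorical")       # one-hot labels for the 120 breeds
valid_data = valid_gen.flow_from_directory(
    "data/valid",                   # hypothetical path
    target_size=(299, 299),
    batch_size=32,
    class_mode="categorical")
```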

We then load the Inception V3 model, chop off its head, freeze the lower layers, and add new fully connected layers.
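A minimal sketch of this step, reusing the `train_data` and `valid_data` generators from the sketch above; the layer sizes and the epoch count are illustrative assumptions:

```python
# Build the transfer model: include_top=False chops off the head.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
for layer in base.layers:
    layer.trainable = False                     # freeze the lower layers

x = GlobalAveragePooling2D()(base.output)
x = Dense(1024, activation="relu")(x)           # new fully connected layer
output = Dense(120, activation="softmax")(x)    # one unit per breed

model = Model(inputs=base.input, outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(train_data, validation_data=valid_data, epochs=10)
```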

Now that the model is trained, we can observe its performance. As you can see, validation loss and accuracy have improved over time.
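Continuing the sketch, one way to plot those validation curves from the `history` object returned by `model.fit` above:

```python
# Plot the validation curves recorded during training.
import matplotlib.pyplot as plt

plt.plot(history.history["val_loss"], label="validation loss")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("epoch")
plt.legend()
plt.show()
```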

We can now test the model on the test data.
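Continuing the sketch, evaluation against a held-out test directory (again an assumed path and layout) might look like this:

```python
# Evaluate on a held-out test set, reusing the imports from the sketches above.
test_data = ImageDataGenerator(
    preprocessing_function=preprocess_input).flow_from_directory(
        "data/test",                 # hypothetical path
        target_size=(299, 299),
        batch_size=32,
        class_mode="categorical",
        shuffle=False)               # keep order so predictions align with files

loss, accuracy = model.evaluate(test_data)
print(f"test loss {loss:.3f}, test accuracy {accuracy:.3f}")
```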

The predicted breed for a specified image can now be displayed right on top of the image.
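One possible way to do that with matplotlib, reusing the generator's class index mapping from the sketches above (the file path is hypothetical):

```python
# Predict one image and show the breed name as the image title.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image

img = image.load_img("data/test/beagle/sample.jpg",   # hypothetical file
                     target_size=(299, 299))
arr = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

probs = model.predict(arr)[0]
breeds = list(train_data.class_indices)   # index order matches the labels
plt.imshow(img)
plt.title(breeds[int(np.argmax(probs))])  # predicted breed above the image
plt.axis("off")
plt.show()
```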

Thus we saw how quickly we were able to adapt Inception V3 to classify dog breeds.

If you would like to learn how to use the Inception V3 model through step-by-step videos with complete code, please sign up for my course at https://www.udemy.com/course/draft/2831200/?referralCode=B6AD67F151FE104FDBA6

About the Author (Evergreen Technologies):

  • LinkedIn: @evergreenllc2020

  • Twitter: @tech_evergreen

  • Udemy: https://www.udemy.com/user/evergreen-technologies-2/

  • GitHub: https://github.com/evergreenllc2020/

Over 22,000 students in 145 countries