Transfer Learning — Life Just Got Easier!!

Source: Deep Learning on Medium


Pick your model, deploy it and play around!!

A web App for Image Classification using various Deep Learning Models.

Deep learning has recently gained enormous popularity because of its performance on a wide range of machine learning tasks. It models high-level abstractions in data by stacking multiple processing layers. Multi-class image classification is a major research topic with wide application prospects in the field of artificial intelligence.

This course project describes the use of pre-trained deep learning models such as VGG16, ResNet50 and Xception to perform multi-class image classification on the large-scale dataset from the yearly ImageNet challenge. We use large, pre-trained deep convolutional neural networks to classify 1.2 million high-resolution images into 1000 different classes. The implementation assigns each image to its most likely class and shows illustrative examples of the prediction probabilities. A web app built with Python’s Flask framework classifies uploaded images using the above-mentioned models. Our goal is a computationally efficient approach that classifies images correctly with high accuracy.

Transfer Learning:

If not for transfer learning, machine learning would be a pretty tough thing for an absolute beginner. At the lowest level, machine learning involves computing a function that maps inputs to their corresponding outputs. Though the function itself is just a bunch of addition and multiplication operations, when each layer is passed through a nonlinear activation function and many such layers are stacked together, the resulting function can learn almost anything, provided there is enough data to learn from and an enormous amount of computational power.
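The "bunch of addition and multiplication operations" above can be sketched in a few lines of NumPy. This is a toy illustration only; the layer sizes and random weights are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # layer 1: 4 inputs -> 3 units
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)   # layer 2: 3 units -> 2 outputs

def relu(z):
    # nonlinear activation: without it, stacked layers collapse
    # into a single linear map
    return np.maximum(z, 0)

def network(x):
    # stacking layers = nesting functions: f(x) = relu(x·W1 + b1)·W2 + b2
    return relu(x @ W1 + b1) @ W2 + b2

print(network(np.ones(4)).shape)  # (2,)
```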

That’s where transfer learning comes into play. In transfer learning, we take the weights of an already trained model (one that has been trained on millions of images belonging to thousands of classes, on several high-power GPUs for several days) and use these already-learned features to predict new classes. The advantages of transfer learning are that there is no need for an extremely large training dataset, and not much computational power is required, since we reuse pre-trained weights and only have to learn the weights of the last few layers.

The reason it works so well is that we use a network pre-trained on the ImageNet dataset, and this network has already learned to recognize trivial shapes and small parts of different objects in its initial layers. To do transfer learning, we simply add a few dense layers at the end of the pre-trained network and learn which combinations of these already-learned features help recognize the objects in our new dataset.

Hence we train only a few dense layers, reusing combinations of the already-learned low-level features to recognize new objects. All of this makes training very fast and requires far less training data than training a convnet from scratch.
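In Keras, this recipe (freeze the pre-trained base, add a few trainable dense layers on top) looks roughly like the sketch below. The `tensorflow.keras` imports, the 256-unit layer and the 10-class head are illustrative assumptions, not the project's exact configuration:

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# pre-trained convolutional base, without the 1000-class ImageNet head
base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False          # freeze the already-learned features

# new dense layers: the only weights we actually train
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation="relu")(x)
out = Dense(10, activation="softmax")(x)   # e.g. 10 new classes

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

From here, `model.fit(...)` on the new dataset updates only the dense head, which is why training is so much cheaper than starting from scratch.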

Hardware Requirements:

· Quad core Intel Core i7 Skylake or higher processor

· 8GB RAM or higher (8GB works, but may not give the performance you want or expect)

· GPU: RTX 2070/ RTX 2080 Ti/ GTX 1070/ GTX 1080/ GTX 1070 Ti

· CPU: 1–2 cores per GPU, depending on how you preprocess data

· Hard drive for data (>= 1TB)

· An additional monitor

· PSU with enough PCIe connectors (6+8 pin)

· Blower-style fans for handling multiple GPUs

· Motherboard with multiple PCIe slots

Software Requirements:

· Windows OS

· Python 3.6+

· Anaconda Navigator (Tool)

· Spyder IDE

· Flask API (Python)

· Numpy package

· os package (Python standard library)

· Keras package

· WSGIServer package

· Pandas package

· Pycharm IDE (For flask projects)

Detailed Design:

This project develops a web app to classify images into different categories using models pre-trained on the ImageNet dataset. ImageNet is a common academic dataset in machine learning for training image recognition systems. The basic idea is to let the user upload an image from his/her system and run the different deep learning image classification models over it, assigning the image to classes with a confidence (probability) score.

System Architecture:

The overall architecture of this system is divided into 3 stages:

Stage-I: (Transfer Learning stage)

This is the initial stage of the project. A huge labelled dataset (ImageNet) is taken, and various deep learning models are trained on it. These models learn by extracting patterns from the features during training.

Figure 1: Transfer Learning Stage (Stage-I)

The next step in working with neural networks is training. During training, the network is fed input data (images, in this case), and the network’s outputs, or guesses, are compared to the expected results (the images’ labels). With each run through the data, the network’s weights are modified to decrease the error rate of the guesses; that is, they are adjusted so that the guesses better match the proper label of an image. (Training a large network on millions of images can take a lot of computing resources, hence the interest in distributing pre-trained nets like these.)

Once trained, the network can be used for inference, i.e. making predictions about the data it sees. Inference is a much less compute-intensive process.
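Inference with one of these pre-trained networks takes only a few lines of Keras. The sketch below assumes `tensorflow.keras`, and the `"dog.jpg"` path in the usage comment is a placeholder:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# downloads the ImageNet weights on first use
model = ResNet50(weights="imagenet")

def top3(img_path):
    """Return the top-3 (label, probability) pairs for one image file."""
    img = image.load_img(img_path, target_size=(224, 224))  # ResNet50 input size
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)                                # shape (1, 1000)
    return [(label, float(prob))
            for (_, label, prob) in decode_predictions(preds, top=3)[0]]

# usage (placeholder path): top3("dog.jpg")
```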

Stage-II: (Building Hybrid classification model)

In this stage, we load the pre-trained model weights from Stage I. We use these weights to initialize our three deep learning models: ResNet50, VGG16 and Xception. We then take the top-3 predictions from each classifier, ensemble the top-1 class outputs of the models, and display the final result. This significantly improves the accuracy of predicting the correct class of the image.

Figure 2: Building Hybrid classification model (Stage-II)
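The ensembling step described above can be sketched as a small pure-Python helper: each model contributes its best-first prediction list, and we keep the single most confident top-1 guess. The model names and probabilities below are illustrative:

```python
def ensemble_top1(predictions):
    """predictions: {model_name: [(label, prob), ...]}, each list sorted
    best-first. Returns (model_name, label, prob) for the single most
    confident top-1 guess across all models."""
    name, ranked = max(predictions.items(), key=lambda kv: kv[1][0][1])
    label, prob = ranked[0]
    return name, label, prob

preds = {
    "ResNet50": [("Chihuahua", 0.9997), ("toy_terrier", 0.0002)],
    "VGG16":    [("Chihuahua", 0.91),   ("Pembroke", 0.04)],
    "Xception": [("toy_terrier", 0.55), ("Chihuahua", 0.40)],
}
print(ensemble_top1(preds))  # -> ('ResNet50', 'Chihuahua', 0.9997)
```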

Stage-III: (Deploying the model as a web Application)

In this stage, we use Python’s Flask framework to develop a web application that deploys our newly built deep learning image classification model.

Figure 3: Deployment of model as Web App (Stage-III)

We upload the image to be tested from the local machine into the web app. The web app makes an API call, runs the models, and returns the output to the user’s screen.
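A minimal Flask sketch of that upload-and-predict flow is shown below. The `/predict` route name and the `classify()` helper are placeholders; in the real app, `classify()` would save the upload and run the three Keras models on it:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def classify(file_storage):
    # placeholder: the real implementation saves the uploaded file and
    # runs the ResNet50/VGG16/Xception models on it
    return [{"label": "Chihuahua", "probability": 0.9997}]

@app.route("/predict", methods=["POST"])
def predict():
    f = request.files["image"]   # image uploaded by the user
    return jsonify(classify(f))

# to serve locally: app.run(debug=True)
```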

Implementation:

ResNet50 model with Flask web app code snippet:

ResNet50 Model code snippet

Flask Web Application UI:

Flask web app for a single model (ResNet50)

Flask web App for our 3 Models:

Image Predictor web page for our 3 Models

Now that our 3 top deep learning models each predict results on the same image (each giving its top-3 predictions), we choose the best prediction as the one with the highest probability.

In the above case, we can take the result to be “Chihuahua”, predicted by the ResNet50 model with 99.97% confidence.

The code for this project can be found at GitHub:

https://github.com/MaajidKhan/Deploying3DeepLearningModels-Flask-
