Deploying a Pytorch ML model in a Flask Based Web Application

After training and testing a machine learning model, the next stage is to deploy it to a live environment. In this article I will show you how to deploy your machine learning model.

There are various ways you can deploy your machine learning model and host it on a website. You can leverage cloud providers like GCP or AWS and expose your model through an HTTPS endpoint; then you can invoke that endpoint and run inference in real time. But that will cost you a lot due to GCP and AWS billing. So in this article I'll show you how to create a Flask-based web application where the ML model works in the back-end, and then how to deploy it to Heroku, which is a free hosting platform. You will be able to host your model on a website and interact with it through a simple web UI.

The end result will look like this:

Here you can submit an image of food, and a CNN in the back-end will classify the image.

https://food-finder-arka.herokuapp.com/

Prerequisites:

The whole code base is present in my repository.

Please install all the dependencies specified in the requirements.txt file. You have to install Flask, Pillow, NumPy, PyTorch, torchvision and gunicorn (it is required for Heroku deployment).
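
For reference, a requirements.txt along these lines is enough for this project (I have left out version pins since those depend on your environment; pin them if your build needs reproducibility):

    Flask
    Pillow
    numpy
    torch
    torchvision
    gunicorn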

CODE:

The main components of the web app are app.py, commons.py and inference.py. You will find other components that are necessary for Heroku deployment; I will explain them later.

app.py: It is the entry point of the web application.

For those who are not very familiar with web applications: this is the file where we specify how our web application will behave when we invoke it.

First we import the necessary libraries. get_flower_name is the function which performs inference on the input image; it is defined in the inference.py file.

Then I have specified the route, i.e. which path will point to which page. It is a simple web application, so it has a single page, and we have specified the methods available on that page. Here both GET and POST are available because we are both receiving data from and sending data to the back-end.

Then I have defined a function which controls the interaction between the back-end and the front-end. When we open the application we are directed to the index.html page, and when we submit an image we are redirected to the result.html page. For Flask you have to store both of those pages in the templates folder.
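
Putting the above together, app.py looks roughly like the sketch below (a minimal sketch: the form field name 'file', the argument passed to get_flower_name and the template variable name are my assumptions, so adapt them to your own templates). It is followed by the __main__ block shown next.

    import os
    from flask import Flask, request, render_template
    from inference import get_flower_name

    app = Flask(__name__)

    @app.route('/', methods=['GET', 'POST'])
    def hello_world():
        if request.method == 'GET':
            # First visit: just render the upload form
            return render_template('index.html')
        if 'file' not in request.files:
            # Form submitted without an image: show the form again
            return render_template('index.html')
        # Read the uploaded image bytes and run inference on them
        image = request.files['file'].read()
        flower_name = get_flower_name(image)
        return render_template('result.html', flower_name=flower_name)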

    if __name__ == '__main__':
        app.run(debug=True, port=os.getenv('PORT', 5000))

This part ensures that if we make changes to the code while the application is running on the local host, they are reflected automatically; you don't have to stop the web server and restart it. The port is read from the PORT environment variable so that Heroku can set it, falling back to 5000 when running locally.

commons.py:

This Python file contains two functions: get_tensor(), which takes care of the pre-processing part, and get_model(), in which we load the model from the saved .pt or .pth file.

In the get_tensor() function we have the transformations that we apply to the input image to make it suitable for our model. As my model here is the ImageNet pre-trained densenet121, I have specified the default transformations for pre-trained models. You can customize this part according to your model.
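
As a sketch, get_tensor() with the default ImageNet pre-processing might look like this (resize, center crop, ToTensor and the standard ImageNet normalization; change these if your model was trained with different transforms):

    import io
    from PIL import Image
    from torchvision import transforms

    def get_tensor(image_bytes):
        # Default pre-processing for ImageNet pre-trained models
        my_transforms = transforms.Compose([
            transforms.Resize(255),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406],
                                 [0.229, 0.224, 0.225])])
        image = Image.open(io.BytesIO(image_bytes)).convert('RGB')
        # Add a batch dimension: shape becomes (1, 3, 224, 224)
        return my_transforms(image).unsqueeze(0)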

In the get_model() function we load the model from the saved file named classifier.pt. Here we download the model architecture from the torchvision library, change the classifier as per the saved model and load the saved weights into the model object. If you are using your own model, please define its architecture and load the saved weights into it using the load_state_dict method.

Note that I have set the map location to CPU, as we will run inference on CPU (the web application is not CUDA-enabled), and you also have to specify strict=False.
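
A sketch of get_model() under those assumptions (it assumes classifier.pt stores a state dict; the classifier head and the number of output classes below are placeholders, so replace them with whatever you trained):

    import torch
    import torch.nn as nn
    from torchvision import models

    def get_model():
        model = models.densenet121(pretrained=True)
        # Replace the classifier head so it matches the saved model
        # (the number of classes here is a placeholder)
        model.classifier = nn.Linear(1024, 102)
        # Load the saved weights on CPU, since the web app has no GPU
        state_dict = torch.load('classifier.pt', map_location='cpu')
        model.load_state_dict(state_dict, strict=False)
        model.eval()  # inference mode: disables dropout, fixes batch-norm stats
        return model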

inference.py:

In this Python file the actual inference happens. We have already created all the necessary helper functions in commons.py.

The first JSON part is necessary for mapping the index of your model output to the actual class name; you can customize it according to your needs.
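
For example, the mapping can be kept in a small JSON file and loaded once at the top of inference.py (the file name cat_to_name.json and the class names in the comment are placeholders, not the exact values from the repository):

    import json

    # e.g. the file contains {"0": "apple_pie", "1": "pizza", ...}
    with open('cat_to_name.json') as f:
        cat_to_name = json.load(f)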

We import those two functions, get_model() and get_tensor(). Then we define the main inference function, get_flower_name(), which we will use for prediction in app.py.

This function takes the uploaded image as input, pre-processes it, performs the prediction and returns the result to app.py, which in turn shows it as output on the result page.
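
A sketch of get_flower_name() along those lines, using the cat_to_name mapping loaded above (it returns a single readable class name, which is what the app.py sketch earlier assumes):

    import torch
    from commons import get_model, get_tensor

    model = get_model()  # load the model once at import time, not on every request

    def get_flower_name(image_bytes):
        tensor = get_tensor(image_bytes)
        with torch.no_grad():              # no gradients needed for inference
            outputs = model(tensor)
        _, prediction = outputs.max(1)     # index of the highest-scoring class
        return cat_to_name[str(prediction.item())]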

Running it on a local server:

Now our application is ready. We can invoke it by simply running python app.py in the Anaconda command prompt (or your own Python environment).

It will run on your local server at http://127.0.0.1:5000. Type that into the browser and you will see your application working. The first invocation takes some time, as it downloads some dependencies (for example, the pre-trained model weights).

Activities before deploying to Heroku:

Up to this point we have successfully created a Flask web application which gives predictions and runs on the local host.

Now, for the next part, we need to host it on the internet. As a free-of-cost option, we will use Heroku.

Before deploying to Heroku, you need to create an account on Heroku and install the Heroku CLI on your desktop.

After that we need to add some files to the application folder for the deployment. They are the Procfile, app.json, runtime.txt and requirements.txt.

You can use them as they are in the repository. In app.json you can change the keywords as per your application, but keep the other files unchanged.
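
As an illustration, the Procfile and runtime.txt are each a single line, roughly like this (the Python version is a placeholder; use the version your app was developed with and one that Heroku supports):

    Procfile:     web: gunicorn app:app
    runtime.txt:  python-3.7.6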

Also, to deploy your model to Heroku, your model size should be under 100 MB, so when you are using a pre-trained model please use a light-weight one like DenseNet or a smaller ResNet. ResNet152, VGG16 or any model whose size is more than 100 MB will not work.

Deployment Process:

This part is simple. You just have to type these commands for deployment.

First, in the Anaconda command prompt, browse to your web application folder, where all the necessary files, your code and the saved model are present. Then type the following commands:

  1. heroku login: it will connect the prompt to your Heroku account. After executing it, follow the instructions on the screen.
  2. After a successful connection we can proceed. We have to create a git repository for the deployment, as follows:
  3. git init
  4. git add .
  5. git commit -m "your message"
  6. heroku create your_app_name: you will see your application repository getting created. The application name should be unique, otherwise this step will fail.
  7. git push heroku master: at this stage your web application gets deployed to Heroku. It will take some time. If it is successful you will get the URL where you can access your application on the internet.

Like this:

If you are getting a 404 error, wait a while, as it takes some time to allocate the resources.

Wow! Now you have your own web application where you can interact with your ML model, and it is live on the internet. If you find this helpful, please star the repo.

Note that I have used a PyTorch-based model here, but you can use another framework as well. You just have to customize some of the code and import the framework-specific libraries. The basic flow is the same.