Deep Learning Web App by fastai v1


This article is part of the “Deep Learning in Practice” series.

Abstract

This article explains how to create a Deep Learning Web App with the fastai v1 library and the Render service, as presented in the fastai course and its online tutorial. After recalling the 3 steps to follow (1. training a model and exporting it, 2. testing the Web App locally with the exported model, 3. deploying the Web App online), we detail them using a jupyter notebook and a Web App that we created. This Web App classifies an image into 1000 categories. It uses Resnet50, a version of the Resnet model that won the ImageNet competition in 2015.

ImageNet Classifier Web App created by Pierre Guillou with fastai v1 and Render

(Development) Training a model

Learning to develop a Deep Neural Network (Deep Learning model) is now accessible to everyone.

There are indeed many online courses (with videos, slides and notebooks) and university courses for learning Deep Learning. It is impossible not to mention, for example, Andrew Ng’s specialization on Coursera, the Facebook course on Pytorch on Udacity, the Google courses, as well as Stanford University courses published online with a one-year lag, such as cs231n (see also online books such as the Deep Learning textbook and Neural Networks and Deep Learning).

These courses are of a high level and any DL enthusiast should follow them. They will teach you the theory behind DL, the main architectures (ConvNet, RNN, GANs…) and the frameworks (scikit-learn, Tensorflow+Keras or Pytorch+fastai) needed to code and train your DL model in a jupyter notebook.

(Production) Creation of a service

Thus, thanks to this set of resources, to the use of a GPU (either the one in your computer or through a cloud service like AWS, Google Cloud, Crestle or Paperspace) and to a lot of determination, you will manage to develop your DL model in a jupyter notebook (for example, an image classifier based on a ConvNet).

But then, how do you move to production mode, i.e. create a service such as a Web App or a mobile application? How do you enable a non-DL specialist to use your model in his/her daily life through an interactive and easy-to-use interface?

After all, the value of a DL model does not lie in its development phase but in its use on real cases.

Fastai | From development to production

The fastai course brings an answer to this question. Taught for the first time by Jeremy Howard and Rachel Thomas at the University of San Francisco in 2016 and updated every year, the fastai course teaches DL through practice via online videos as well as a tutorial. It is based on a library with online documentation that incorporates the latest techniques for training a DL model, and it has an online forum.

If this set of resources already makes fastai a reference DL course in 2019, the presentation in lesson 2 of how to create a Web App from your DL model definitely places it in the “DL course by practice” category, by allowing you to turn your model into an online service.

The 3 steps to follow are:

  1. Training a DL model and exporting it
  2. Testing the Web App locally
  3. Deploying the Web App on a server

Note: there are many online articles describing the creation of a Web App using a Deep Learning model, especially with Tensorflow + Keras or scikit-learn (see the list at the bottom of this article). The point here is not to say that fastai is the only library for creating a DL Web App, which would be wrong. Our point is that the fastai course integrates the creation of such a Web App into its content, which seems unique compared to other DL courses of the same level.

Step 1 | Training a DL model

If you are interested in DL, you necessarily know the ImageNet competition launched by Stanford University in 2010. It is the reference in image classification (and today also in segmentation). Each year, the best DL specialists from the best data companies compete to propose a model approaching 100% recognition of the objects in images.

In 2012, the AlexNet model won the ImageNet competition. It showed the effectiveness of deep ConvNet neural networks for this type of task. In the following years, new models improved on its performance and, in 2015, Microsoft won the competition with its Resnet 152 model. Resnet (in its different versions, i.e. depending on the number of hidden layers: 18, 34, 50, 101, 152) is today the reference model in image processing by DL, and as its parameters can be downloaded online, it is very often used in Transfer Learning (i.e., as a pre-trained model to create more specialized ones).

Using fastai v1, we created a jupyter notebook “Pretrained ImageNet Classifier with fastai v1” that recreates this mythical image classifier with Resnet, then exports it to a file ready to be loaded for inference.

Prediction of the 3 most probable categories with the ImageNet 2015 winning classifier (notebook)

Note: in the rest of this article, we will use this image classification model, but any other model exported by fastai v1 can be used to create a Web App.
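To make step 1 more concrete, here is a minimal sketch of the export/reload pattern with fastai v1 (it is not the exact code of our notebook: the data path and the short fine-tuning below are only assumptions for illustration):

from fastai.vision import *

# Hypothetical folder containing one subfolder of images per class
path = Path('data/images')
data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)

# ResNet50 pretrained on ImageNet as the backbone (transfer learning)
learn = create_cnn(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(4)

# Serialize the whole inference pipeline (transforms + model) to path/'export.pkl'
learn.export()

# Later (for example in server.py): reload the exported model and predict on one image
learn = load_learner(path)
pred_class, pred_idx, probs = learn.predict(open_image('test.jpg'))
print(pred_class, probs[pred_idx])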

Step 2 | Testing the Web App locally

To date, fastai v1 documents 3 online services for putting your DL model into production (i.e., for creating a Web App that uses the model you trained and exported via fastai). Each of these services has an online guide: Render, AWS Elastic Beanstalk and Google App Engine.

Because of its ease of use and the fact that it is free for 5 months, we used the Render service to deploy our ImageNet Classifier Web App. Here are the steps we followed with version 1.0.42 of fastai v1 on Windows 10:

  1. Go to https://github.com/render-examples/fastai-v3 and download the file fastai-v3-master.zip from this repository by clicking on the green “Clone or Download” button (or run a git clone in your terminal).
  2. Extract the zip into your test folder, for example file:///C:/webapp/fastai-v3-master/
  3. Launch your Anaconda Prompt terminal and activate your fastai environment (activate fastai-v1 for example, if fastai-v1 is the name of your fastai environment).
  4. In your terminal, install via pip the libraries needed to run a local Starlette server (or install them all in a single command, as shown right after this list):
pip install starlette
pip install aiofiles
pip install uvicorn
pip install aiohttp
pip install python-multipart
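
As a convenience, the same libraries can also be installed in a single command (equivalent to the list above):

pip install starlette aiofiles uvicorn aiohttp python-multipart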

That’s it! You now have a local Web App server. You can test it by launching the following command in your terminal:

python app/server.py serve

All you have to do now is customize the files server.py, index.html, style.css and client.js to test the application with your own model. In particular, in the server.py file you must set the file name, the classes and the download link of the fastai v1 model you exported in step 1, as explained in the “Per-project setup” section of the “Deploying on Render” guide.
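In the version of server.py that we used, the part to customize looks roughly like this (the variable names come from the render-examples/fastai-v3 template and may differ in a later version of the repository; the URL and classes below are placeholders, not the values of our model):

# in app/server.py
export_file_url = 'https://example.com/your-export.pkl'  # direct download link to your export.pkl
export_file_name = 'export.pkl'
classes = ['class_1', 'class_2', 'class_3']  # the categories predicted by your model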

After launching the python app/server.py serve command, your Web App will be available in your web browser at http://localhost:5042/

You can also view and interact with your Web App in a jupyter notebook using the following code:

from IPython.display import IFrame
IFrame('http://localhost:5042/', width='100%', height=520)
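
You can also send an image directly to the prediction endpoint from Python. Below is a minimal sketch that assumes the endpoint name (/analyze) and the form field name (file) used by the template at the time of writing; check your server.py and client.js if they differ:

import requests

# POST a local image to the Web App and print the JSON prediction
with open('test.jpg', 'rb') as f:
    resp = requests.post('http://localhost:5042/analyze', files={'file': f})
print(resp.json())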

Step 3 | Deploying the Web App on a server

For this last step, we invite you to follow the “Deploy” part of the “Deploying on Render” guide.

The idea is that you push the directory of your Web App to a repository on your GitHub account, create an account on Render, and link this GitHub repository to a new Web Service in your Render account.
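If you are not used to git, here is a typical sequence to push the customized directory to your GitHub account (the repository name fastai-v3 and <your-username> are placeholders; create the empty repository on GitHub beforehand, then Render only needs its URL):

cd C:\webapp\fastai-v3-master
git init
git add .
git commit -m "Customize server.py for my model"
git remote add origin https://github.com/<your-username>/fastai-v3.git
git push -u origin master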

ImageNet Classifier Web App created by Pierre Guillou online

Other resources

List of online articles about creating a Web App with Flask, using a Pytorch+fastai, Tensorflow+Keras or scikit-learn model: