COVID-19 Prediction from Chest X-ray using Deep Transfer Learning in Azure ML — Part 2


Image Credit : Screenshot from https://scitechdaily.com/

Introduction:

This Part-2 article illustrates how to deploy the COVID-19 model we pre-trained in Part-1 into an Azure ML workspace. As mentioned, in Part-1 the model was trained in the local environment and the best model was saved in .h5 file format.

In Azure ML we will register our custom pre-trained model and deploy it as an ACI (Azure Container Instances) service, so that it can be accessed by any other application or Azure service.

Please read my Part-1 article about the COVID-19 dataset, image pre-processing and building the deep transfer learning model.

I hope that by the end of this article readers will understand how to train a deep transfer learning model for any image classification or object recognition task, and how to deploy any custom-built model into an Azure ML workspace.

Prerequisites:

  1. You should have an Azure subscription, and an Azure ML workspace should already be created.
  2. The Azure ML SDK (version >= 1.4.0) should be installed on the local machine (a quick version check is sketched right after this list).
  3. Python (version >= 3.7.3) should be installed on the local machine along with Anaconda.
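
If you are not sure which SDK version you have, the small check below can confirm it. This is only a convenience sketch and assumes the SDK was installed with pip (pip install azureml-sdk).

# Quick sanity check of the local environment
import sys
import azureml.core

print("Python version:", sys.version)
print("Azure ML SDK version:", azureml.core.VERSION)  # should be >= 1.4.0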

Step-by-step instructions to deploy our final model into Azure ML:

  1. Import all required libraries:

First, you need to import the Azure ML related libraries given below into your notebook.

from azureml.core import Workspace, Experiment, Run
from azureml.core.model import Model
from azureml.core.authentication import InteractiveLoginAuthentication
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.webservice import AciWebservice,Webservice
import requests
import json
import base64

2. Get Azure ML Workspace Information:

Use the code below to load your Azure ML workspace information into the “ws” object. Here, you need to enter your Azure tenant ID for interactive login authentication, as well as your subscription ID, workspace name and resource group. You can get this information from your Azure portal.

When you run the code below, it opens the Azure interactive login screen for authentication.


interactive_auth = InteractiveLoginAuthentication(tenant_id="<Your Azure Tenant Id>")

ws = Workspace(subscription_id="<Your Azure Subscription Id>",
               resource_group="<Resource Group>",
               workspace_name="<Your Workspace Name>",
               auth=interactive_auth)
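
Optionally, once the “ws” object has been created you can cache the workspace details locally so that later sessions can reconnect without re-entering the IDs. The write_config/from_config helpers shown below are standard SDK calls; this step is not required for the rest of the article.

# Optional: cache the workspace details in .azureml/config.json for later sessions
ws.write_config()

# A new notebook/session can then reconnect with:
# ws = Workspace.from_config()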

3. Register the model:

Use the Model.register method to register our trained model in the Azure ML workspace. In Part-1 we saved our trained model as an .h5 file, and that same file should be referenced here along with the workspace object.

model = Model.register(model_path = "Covid_Final_Best_model.h5",
                       model_name = "Covid19Prediction",
                       description = "COVID19 Prediction using xception model",
                       workspace = ws)

Once the model is registered successfully, you can see the registered model information in your Azure portal.
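
If you want to confirm the registration from code rather than the portal, you can fetch the model back by the name used above; this is just an optional sanity check.

# Optional: confirm the registration from the SDK using the name given in Model.register
registered_model = Model(ws, name="Covid19Prediction")
print(registered_model.name, registered_model.version)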

4. Define the inference configuration:

The inference configuration defines the environment used to run the deployed model. The inference configuration references the following entities, which are used to run the model when it’s deployed:

  • Entry Script : This file (named modelscore.py) loads the model when the deployed service starts. It is also responsible for receiving data, passing it to the model, and then returning a response.
  • Environment : An environment defines the software dependencies needed to run the model and entry script. Here, we should add required Conda dependency packages for our model.
# Create the environment
myenv = Environment(name="myenv")
conda_dep = CondaDependencies()

# Define the packages needed by the model and scripts
conda_dep.add_conda_package("pandas")
conda_dep.add_conda_package("numpy")
conda_dep.add_conda_package("scikit-learn")
conda_dep.add_conda_package("tensorflow")
conda_dep.add_conda_package("keras")
conda_dep.add_conda_package("opencv")
# You must list azureml-defaults as a pip dependency
conda_dep.add_pip_package("azureml-defaults")


# Adds dependencies to PythonSection of myenv
myenv.python.conda_dependencies=conda_dep

inference_config = InferenceConfig(entry_script="modelscore.py",
                                   environment=myenv)

Entry Script :

The entry script has only two required functions, init() and run(data). These functions are used to initialize the service at startup and run the model using request data passed in by a client.

Here, init() is the startup method; in it we need to initialize the global “model” object and use the keras.models.load_model function to load it from our registered model file, as shown below.

import os
from keras.models import load_model

# Called when the deployed service starts
def init():
    global model
    print("init method has been invoked")
    # The AZUREML_MODEL_DIR environment variable indicates
    # a directory containing the model file you registered.
    model_filename = 'Covid_Final_Best_model.h5'
    model_path = os.path.join(os.environ['AZUREML_MODEL_DIR'], model_filename)
    # load model
    model = load_model(model_path)
    print("init method has been completed")

The run(data) function, which receives the request data, passes it to the loaded model and returns the response, is shown in the gist below:

https://gist.github.com/krishnakarthi/5ba4b7c61a35884fa749784431225932
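
For reference, the sketch below shows roughly what such a run(data) function could look like. It assumes the client sends a JSON payload with a base64-encoded chest X-ray image (consistent with the json and base64 imports earlier in this article) and that the image is resized to 224x224 before prediction; the field names, input size and pre-processing in the actual gist may differ.

# A minimal sketch of run(data), assuming a JSON body like {"data": "<base64-encoded image>"}
import json
import base64
import numpy as np
import cv2

def run(data):
    try:
        # Decode the base64-encoded image sent by the client
        img_b64 = json.loads(data)['data']
        img_bytes = base64.b64decode(img_b64)
        img = cv2.imdecode(np.frombuffer(img_bytes, np.uint8), cv2.IMREAD_COLOR)

        # Resize and scale to the input expected by the model (assumed 224x224 here)
        img = cv2.resize(img, (224, 224)) / 255.0
        img = np.expand_dims(img, axis=0)

        # Predict with the global model loaded in init()
        prediction = model.predict(img)
        return json.dumps({"prediction": prediction.tolist()})
    except Exception as e:
        return json.dumps({"error": str(e)})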

5. Deploy the model into ACI:

# ACI deployment configuration: 1 CPU core and 4 GB of memory
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 4)

# Retrieve the registered model from the workspace
model = Model(ws, name='Covid19Prediction')
service = Model.deploy(ws, 'covid19webservice', [model], inference_config, deployment_config)

service.wait_for_deployment(True)
print(service.state)
print("scoring URI: " + service.scoring_uri)

6. Test Model using ACI Service: