Training Your First Distributed PyTorch Lightning Model with Azure ML

TLDR; This post outlines how to get started training multi-GPU models with PyTorch Lightning using Azure Machine Learning.

If you are new to Azure, you can get started with a free subscription using the link below.

What is PyTorch Lightning?

PyTorch Lightning is a lightweight PyTorch wrapper for high-performance AI research. Lightning is designed around four principles that simplify the development and scaling of production PyTorch models:

  1. Enable maximum flexibility.
  2. Abstract away unnecessary boilerplate, but make it accessible when needed.
  3. Systems should be self-contained (i.e., optimizers, computation code, etc.).
  4. Deep learning code should be organized into four distinct categories: research code (the LightningModule), engineering code (which you delete; it is handled by the Trainer), non-essential research code (logging, etc., which goes in callbacks), and data (use PyTorch DataLoaders or organize them into a LightningDataModule).

Once you do this, you can train on multiple GPUs, TPUs, or CPUs, and even in 16-bit precision, without changing your code, which is perfect for taking advantage of distributed cloud computing services such as Azure Machine Learning.
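
To illustrate the idea, here is a minimal sketch (not from the original post) of how the same training run could be switched between hardware targets purely through Trainer flags; model stands in for any LightningModule.

import pytorch_lightning as pl

# The same LightningModule trains on different hardware just by changing
# Trainer flags; `model` is a placeholder for any LightningModule.
trainer = pl.Trainer(max_epochs=5)                                     # CPU
trainer = pl.Trainer(max_epochs=5, gpus=2, distributed_backend="ddp")  # multi-GPU
trainer = pl.Trainer(max_epochs=5, gpus=1, precision=16)               # 16-bit precision
# trainer.fit(model)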

Additionally, PyTorch Lightning Bolts provides pre-trained models that can be wrapped and combined to prototype research ideas more rapidly.

What is Azure Machine Learning?

Azure Machine Learning (Azure ML) is a cloud-based service for creating and managing machine learning solutions. It’s designed to help data scientists and machine learning engineers leverage their existing data processing and model development skills and frameworks.

Azure Machine Learning provides the tools developers and data scientists need for their machine learning workflows, including:

  • Azure Compute Instances that can be accessed online or linked to remotely with Visual Studio Code.
  • Out of the box support for machine learning libraries such as PyTorch, TensorFlow, scikit-learn, and Keras.
  • Code, Data, Model Management
  • Scalable Distributed Training and Cheap Low Priority GPU Compute
  • Auto ML and Hyper Parameter Optimization
  • Container Registry, Kubernetes Deployment and MLOps Pipelines
  • Interpretability Tools and Data Drift Monitoring

You can even use external open source services like MLflow to track metrics and deploy models or Kubeflow to build end-to-end workflow pipelines.

Check out some AzureML best practices examples at

Given the advantages of PyTorch Lightning and Azure ML, it makes sense to walk through an example of how to leverage the best of both worlds.

Getting Started

Step 1 — Set up Azure ML Workspace

Create an Azure ML Workspace from the Azure Portal or with the Azure CLI.

Connect to the workspace with the Azure ML SDK as follows:

from azureml.core import Workspace
ws = Workspace.get(name="myworkspace", subscription_id='<azure-subscription-id>', resource_group='myresourcegroup')

Step 2 — Set up Multi GPU Cluster

from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

# Choose a name for your GPU cluster (letters, digits, and dashes only)
gpu_cluster_name = "gpu-cluster"

# Verify that the cluster does not already exist
try:
    gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC12s_v3',
                                                           max_nodes=2)
    gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)

gpu_cluster.wait_for_completion(show_output=True)

Step 3 — Configure Environment

To run PyTorch Lightning code on our cluster, we need to configure our dependencies, which we can do with a simple YAML file.

channels:
  - conda-forge
dependencies:
  - python=3.6
  - pip
  - pip:
      - azureml-defaults
      - torch
      - torchvision
      - pytorch-lightning

We can then use the AzureML SDK to create an environment from our dependencies file and configure it to run on any Docker base image we want.

from azureml.core import Environment

# environment_name is any name for the environment; environment_file is the
# path to the YAML file above
env = Environment.from_conda_specification(environment_name, environment_file)

# specify a GPU base image
env.docker.enabled = True
env.docker.base_image = (
    "mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04"
)

Step 4 — Training Script

Create a ScriptRunConfig to specify the training script and arguments, the environment, and the cluster to run on.

We can use any example training script from the PyTorch Lightning examples or our own experiments, for instance something along the lines of the sketch below.
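
A minimal train.py might look like this sketch; the MNISTClassifier import is a hypothetical LightningModule defined in the source directory, and the Trainer reads --gpus and --distributed_backend straight from the command line.

from argparse import ArgumentParser

import pytorch_lightning as pl

from mnist_classifier import MNISTClassifier  # hypothetical LightningModule in source_dir


def main():
    parser = ArgumentParser()
    # adds --gpus, --max_epochs, --distributed_backend, etc. to the CLI
    parser = pl.Trainer.add_argparse_args(parser)
    args = parser.parse_args()

    model = MNISTClassifier()
    trainer = pl.Trainer.from_argparse_args(args)
    trainer.fit(model)


if __name__ == "__main__":
    main()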

Step 5 — Run Experiment

For GPU training on a single node, specify the number of GPUs to train on (typically this corresponds to the number of GPUs in your cluster’s VM SKU) and the distributed mode, in this case DistributedDataParallel ("ddp"), which PyTorch Lightning expects as the --gpus and --distributed_backend arguments, respectively. See the Lightning multi-GPU training documentation for more information.

from azureml.core import ScriptRunConfig, Experiment

cluster = ws.compute_targets[gpu_cluster_name]

# source_dir, script_name, and experiment_name are placeholders for your
# project folder, training script, and experiment name
src = ScriptRunConfig(
    source_directory=source_dir,
    script=script_name,
    arguments=["--max_epochs", 25, "--gpus", 2, "--distributed_backend", "ddp"],
    compute_target=cluster,
    environment=env,
)

run = Experiment(ws, experiment_name).submit(src)
run

We can view the run logs and details in real time with the following SDK commands.

from azureml.widgets import RunDetails

RunDetails(run).show()
run.wait_for_completion(show_output=True)

Next Steps and Future Posts

Now that we’ve set up our first Azure ML PyTorch Lightning experiment, here are some advanced steps to try out; we will cover them in more depth in later posts.

1. Link a Custom Dataset from Azure Datastore

This example used the MNIST dataset from PyTorch’s datasets. If we want to train on our own data, we need to integrate with the Azure ML Datastore, which is relatively trivial; we will show how to do this in a follow-up post, along the lines of the sketch below.
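
As a rough sketch of the idea (the dataset path and --data_dir argument are assumptions, not part of the original example), a file dataset on the workspace datastore could be mounted into the run like this:

from azureml.core import Dataset, ScriptRunConfig

datastore = ws.get_default_datastore()
dataset = Dataset.File.from_files(path=(datastore, "datasets/mnist"))  # hypothetical path

src = ScriptRunConfig(
    source_directory=source_dir,
    script=script_name,
    arguments=["--data_dir", dataset.as_named_input("mnist").as_mount(),
               "--max_epochs", 25, "--gpus", 2, "--distributed_backend", "ddp"],
    compute_target=cluster,
    environment=env,
)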

2. Create a Custom PyTorch Lightning Logger for AML and Optimize with Hyperdrive

In this example, all of our model logging was stored in the Azure ML driver.log, but Azure ML experiments have much more robust logging tools that can be integrated directly into PyTorch Lightning with very little work, as the sketch below suggests. In the next post we will show how to do this and what we gain with HyperDrive.
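
To give a flavour of what that integration could look like, here is a minimal sketch of a custom Lightning logger that forwards metrics to the current Azure ML run; this class is our own assumption, not the official integration.

from azureml.core.run import Run
from pytorch_lightning.loggers import LightningLoggerBase
from pytorch_lightning.utilities import rank_zero_only


class AzureMLLogger(LightningLoggerBase):
    """Forwards Lightning metrics to the Azure ML run that launched the script."""

    def __init__(self):
        super().__init__()
        self.run = Run.get_context()

    @property
    def experiment(self):
        return self.run

    @property
    def name(self):
        return "AzureMLLogger"

    @property
    def version(self):
        return self.run.id

    @rank_zero_only
    def log_hyperparams(self, params):
        # params may be an argparse.Namespace or a dict
        params = params if isinstance(params, dict) else vars(params)
        for key, value in params.items():
            self.run.tag(str(key), str(value))

    @rank_zero_only
    def log_metrics(self, metrics, step=None):
        for key, value in metrics.items():
            self.run.log(key, value)

The logger would then be passed to the Trainer, e.g. Trainer(logger=AzureMLLogger()).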

3. Multi-Node Distributed Compute with the PyTorch Lightning Horovod Backend

In this example we showed how to leverage all the GPUs on a single-node cluster; in the next post we will show how to distribute training across nodes with PyTorch Lightning’s Horovod backend, roughly as sketched below.
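
As a rough sketch of what multi-node submission could look like (the Horovod arguments and MpiConfiguration counts are assumptions based on a two-GPU-per-node cluster, and the environment would also need horovod installed):

from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import MpiConfiguration

# one process per GPU: two nodes of Standard_NC12s_v3 with 2 GPUs each
src = ScriptRunConfig(
    source_directory=source_dir,
    script=script_name,
    arguments=["--max_epochs", 25, "--gpus", 1, "--distributed_backend", "horovod"],
    compute_target=cluster,
    environment=env,
    distributed_job_config=MpiConfiguration(process_count_per_node=2, node_count=2),
)

run = Experiment(ws, experiment_name).submit(src)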

4. Deploy our Model to Production

In this example we showed how to train a distributed PyTorch Lightning model; in the next post we will show how to deploy the model as an AKS service.

If you enjoyed this article, check out my post on 9 Tips for Production Machine Learning, and feel free to share it with your friends!

Acknowledgements

I want to give a major shout-out to Minna Xiao from the Azure ML team for her support and commitment to working towards a better developer experience with open-source frameworks such as PyTorch Lightning on Azure.

About the Author

Aaron (Ari) Bornstein is an AI researcher with a passion for history, engaging with new technologies, and computational medicine. As an Open Source Engineer on Microsoft’s Cloud Developer Advocacy team, he collaborates with the Israeli hi-tech community to solve real-world problems with game-changing technologies that are then documented, open sourced, and shared with the rest of the world.