MLOPS, Integrating ML with DevOps

Original article was published on Deep Learning on Medium

Project Description

1. Create a container image that has Python3 and Keras or numpy installed, using a Dockerfile.

2. When we launch this image, it should automatically start training the model inside the container.

3. Create a job chain of job1, job2, job3 and job4 using the Build Pipeline plugin in Jenkins.

4. Job1: Pull the GitHub repo automatically whenever a developer pushes to GitHub.

5. Job2: By inspecting the code or program file, Jenkins should automatically launch a container from the image that already has the required machine learning software and interpreter installed, deploy the code into it, and start training (e.g., if the code uses a CNN, Jenkins should start the container that already has all the software required for CNN processing).

6. Job3: Train your model and compute its accuracy or metrics; if the accuracy is less than 80%, tweak the machine learning model architecture and retrain the model.

7. Job4: Send a notification that the best model has been created.

8. Create one extra job5 for monitoring: if the container where the app is running fails for any reason, this job should automatically restart the container from where the last trained model left off.
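The monitoring logic of job5 can be sketched as a small shell function. The names here are assumptions: mlops_train is a hypothetical container name, and the DOCKER variable just makes the docker binary swappable.

```shell
#!/bin/bash
# Sketch of the job5 monitor (container name is hypothetical).
CONTAINER="${CONTAINER:-mlops_train}"
DOCKER="${DOCKER:-docker}"

restart_if_down() {
  # If the training container is not in the list of running containers,
  # start it again. Since the saved weights stay in the container (or in
  # an attached volume), training resumes from the last checkpoint.
  if ! "$DOCKER" ps --format '{{.Names}}' | grep -qx "$CONTAINER"; then
    "$DOCKER" start "$CONTAINER"
  fi
}
```

In Jenkins, job5 can simply run this check on a "Build periodically" schedule.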

Prerequisites for the practical:

  1. Docker installed and configured on your system.
    → 1.1 CentOS image downloaded on your system.
  2. Git installed on your system.
  3. ngrok (to create a tunnel so that IPs outside the private network can reach the locally running service).
  4. A GitHub repository to commit the code to, or one cloned to the local system.
  5. Create a post-commit hook in the .git/hooks folder so that whenever a developer commits code, it is automatically pushed to GitHub.
  6. Jenkins configured on your base O.S., with the Build Pipeline plugin installed and the email service configured.
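The post-commit hook from point 5 can be created like this; the remote and branch names ("origin" and "master") are assumptions, so adjust them to your repo:

```shell
# Install a post-commit hook that pushes every local commit to GitHub.
# Run this inside the repository; "origin" and "master" are assumptions.
mkdir -p .git/hooks
cat > .git/hooks/post-commit <<'EOF'
#!/bin/bash
git push origin master
EOF
chmod +x .git/hooks/post-commit
```

Git runs this script automatically after every git commit, so a plain commit now ends with a push to GitHub.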

Note: Preferably, perform the tasks on a Linux machine for smooth functioning and seamless installations (I am using RHEL-8).

Stepwise Implementation

STEP 1: Create a container image that has Python3 and Keras or numpy installed, using a Dockerfile

1. Create a workspace for your project and, inside that workspace, create a Dockerfile based on CentOS for the traditional ML image, which contains libraries like scikit-learn, numpy, etc.

2. Create another workspace and, inside it, create a Dockerfile based on CentOS for the Deep Learning image, which contains libraries like keras, tensorflow, etc.

The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions.

The RUN instruction will execute any commands in a new layer on top of the current image and commit the results.
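Putting the two instructions together, the Deep Learning Dockerfile can be sketched like this (the base tag and package list are assumptions; the ML Dockerfile is the same idea with scikit-learn in place of keras):

```shell
# Create the workspace and a minimal Dockerfile for the DL image.
# The centos tag and the package list are assumptions — adjust as needed.
mkdir -p dl_Dockerfile
cat > dl_Dockerfile/Dockerfile <<'EOF'
# FROM sets the base image for all subsequent instructions
FROM centos:7

# RUN executes in a new layer and commits the result into the image
RUN yum install -y python3 python3-pip && \
    pip3 install --no-cache-dir numpy pandas tensorflow keras
EOF
```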

3. Create the images by building the Dockerfiles

$ docker build -t env_ml:v1 /root/Desktop/mlops_task1/ml_Dockerfile/
$ docker build -t env_dl:v1 /root/Desktop/mlops_task1/dl_Dockerfile/

env_ml:v1 & env_dl:v1 are the names (with a version tag) you give the images

ml_Dockerfile/ & dl_Dockerfile/ are the workspaces where the Dockerfiles are stored
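Once the images are built, launching a container is what actually kicks off training (point 2 of the project description). A sketch — mlops_train is a hypothetical container name, and the image's CMD is assumed to run the training script:

```shell
DOCKER="${DOCKER:-docker}"   # lets the docker binary be swapped out

launch_training() {
  # Run the DL image detached; the image's CMD/ENTRYPOINT starts training.
  "$DOCKER" run -dit --name mlops_train env_dl:v1
}
# On a real host this is simply: docker run -dit --name mlops_train env_dl:v1
```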

STEP 2: Create Job1, which will pull the GitHub repo automatically when a developer pushes to GitHub.

1. Selecting Git as the SCM and providing the URL of the GitHub repo lets this job pull the code from the GitHub repo
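In the job's "Execute shell" build step, a common pattern is to copy the freshly pulled code from the Jenkins workspace into a directory the later jobs (and containers) can reach. A sketch — the destination path /mlops_code is an assumption:

```shell
# Job1 build-step sketch: copy the checked-out repo to a shared directory.
copy_workspace() {
  src="$1"    # Jenkins exposes the checkout directory as $WORKSPACE
  dest="$2"   # hypothetical shared path, e.g. /mlops_code
  mkdir -p "$dest"
  cp -rf "$src"/. "$dest"/
}
# Inside the Jenkins job this would be: copy_workspace "$WORKSPACE" /mlops_code
```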