Original article was published on Deep Learning on Medium


1. Create a container image that has Python 3 and Keras or NumPy installed, using a Dockerfile.

2. When we launch this image, it should automatically start training the model in the container.

3. Create a job chain of job1, job2, job3, job4 and job5 using the Build Pipeline plugin in Jenkins.

4. Job1: Pull the GitHub repo automatically whenever a developer pushes to GitHub.

5. Job2: By looking at the code or program file, Jenkins should automatically start the container image that has the respective machine-learning software/interpreter installed, deploy the code into it and start training (e.g. if the code uses a CNN, Jenkins should start the container that already has all the software required for CNN processing).

6. Job3: Train your model and report the accuracy or metrics.

7. Job4: If the accuracy is less than 80%, tweak the machine-learning model architecture.

8. Job5: Retrain the model, or notify that the best model has been created.

9. Create one extra job, job6, for monitoring: if the container where the app is running fails for any reason, this job should automatically start the container again from where the last trained model left off.


1. This is my Dockerfile; it includes almost all the libraries required for machine learning/deep learning.
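The actual Dockerfile is in the screenshot, but as a rough sketch, a minimal version could look like this (the base image and package list here are my assumptions, not the original file):

```dockerfile
# Minimal sketch, NOT the original Dockerfile: base image and
# package list are assumptions.
FROM centos:7

# Python 3 plus a common ML/DL stack
RUN yum install -y python3 && \
    pip3 install --no-cache-dir numpy pandas scikit-learn keras tensorflow

# Directory where the training code will be mounted
WORKDIR /mlcode

# Drop into Python so the ML environment starts on launch
CMD ["python3"]
```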

2. When we build and run this image, it will automatically start the machine-learning/deep-learning environment using the command below. My Dockerfile is in the tech folder.

docker build -t deeplearning:v1 /tech/

NOTE: If someone faces an issue of this kind while building the image, I suggest turning the network off and then on again. It worked for me.

Successfully built the image


3. A beautiful output using the Build Pipeline plugin




This is a straightforward task: download the code pushed by the developer on GitHub into the Jenkins workspace.

The developer pushed the code


Since almost all the libraries are included in my Dockerfile, it will launch only one environment. But for testing I've created three files, including one with simple NumPy code and one with CNN code.

The keyword searching is done with the help of the cat command.
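As a sketch of what job2's keyword check might look like in shell (the file names, keywords and image tags are my assumptions, not the exact job configuration):

```shell
# Hypothetical sketch of job2: read the pushed file with cat and grep
# for a library keyword to decide which environment container to launch.
pick_image() {
    if cat "$1" | grep -qiE 'keras|conv2d|cnn'; then
        echo "deeplearning:v1"   # CNN code -> full deep-learning container
    else
        echo "numpy-env:v1"      # plain NumPy code -> lighter container
    fi
}

echo "from keras.layers import Conv2D" > sample.py   # demo input
pick_image sample.py   # -> deeplearning:v1
```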


Run job3 as shown:

It will both run the container in the background and execute the code at the same time. Here I've mounted my code, which lives on my host OS, into the Docker container with the -v option.
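The job3 shell step can be sketched roughly like this (the host path, container name and script name are assumptions):

```shell
# Hypothetical sketch of the job3 docker command. -dit detaches the
# container so training keeps running in the background; -v mounts the
# code directory that lives on the host OS into the container.
CODE_DIR=/root/mlcode        # assumed host directory with the training script
IMAGE=deeplearning:v1        # image built earlier from the Dockerfile

CMD="docker run -dit --name trainer -v ${CODE_DIR}:/mlcode ${IMAGE} python3 /mlcode/train.py"
echo "${CMD}"                # the command job3 would execute
```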


I've done both of the jobs (job4 and job5) in a single Jenkins job. This was the toughest job of them all. I tried to automate it as much as I could, but the only manual part that I couldn't automate is described below:

NOTE: I've set the threshold at 98%, since in the worst case my code was giving 97% accuracy.

The only manual part of this task is that when the accuracy is less than 98%, I have to copy the code to the Jenkins workspace manually. If it's greater than 98%, it will just print the accuracy.

When accuracy < 98%
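That check can be sketched in shell as follows (assuming, as an illustration, that the training script writes its final accuracy to a file; the file name and sample value are my assumptions):

```shell
# Hypothetical sketch of the job4 accuracy check. The training script is
# assumed to write its final accuracy (e.g. "97.2") to accuracy.txt.
echo "97.2" > accuracy.txt     # sample value for demonstration

THRESHOLD=98
ACCURACY=$(cat accuracy.txt)

# Shell arithmetic is integer-only, so compare floats with awk instead.
if awk -v a="$ACCURACY" -v t="$THRESHOLD" 'BEGIN { exit !(a < t) }'; then
    echo "accuracy ${ACCURACY}% is below ${THRESHOLD}% -- tweak the model"
else
    echo "best model created with accuracy ${ACCURACY}%"
fi
```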

When the above code is executed, the code gets copied into the Jenkins workspace and now we tweak it. For tweaking the model I've downloaded the Text File Operations plugin, with which I overwrite my existing code with a modified version that adds a convolution layer and a pooling layer.

The variable x is responsible for adding the layers
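What overwriting the file effectively achieves can be sketched in shell (the script name, marker comment and layer parameters are assumptions; the real job does this through the plugin's UI):

```shell
# Demo stand-in for the training script; the real one comes from the repo.
cat > train.py <<'EOF'
model.add(Convolution2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# ADD-LAYERS-HERE
model.add(Flatten())
EOF

# Append an extra Conv+Pool block right after the marker line, which is
# roughly what replacing the file with the modified code does.
sed -i -e '/# ADD-LAYERS-HERE/a model.add(Convolution2D(64, (3, 3), activation="relu"))' \
       -e '/# ADD-LAYERS-HERE/a model.add(MaxPooling2D(pool_size=(2, 2)))' train.py
```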

After this I have to push the modified code to GitHub, which automatically triggers JOB1 (because of the change in the code). The git commands are executed in the shell itself; Git Publisher is responsible for pushing this code to the master branch.
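The shell part of that step might look like this (run here against a throwaway repo for illustration; in the real job the commands run in the Jenkins workspace and Git Publisher does the actual push to master):

```shell
# Hypothetical sketch; a temp directory stands in for the Jenkins workspace.
REPO=$(mktemp -d)
cd "$REPO"
git init -q .
echo 'model.add(Flatten())' > train.py   # the tweaked script

git add train.py
git -c user.email=jenkins@example.com -c user.name=jenkins \
    commit -q -m "Tweak model: add extra Conv+Pool block"
# Git Publisher then pushes this commit to the master branch, and the
# GitHub webhook/SCM poll triggers JOB1 again.
git log --oneline
```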


This job's duty is to monitor the container: if the container where the app is running fails for any reason, this job will automatically start a new container.
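A sketch of what this monitoring job's shell step could do (container and image names are assumptions; `docker start` reuses the stopped container, so training can resume from its last state, matching the task description):

```shell
# Hypothetical job6 sketch: restart the training container if it has died.
NAME=trainer

check_and_restart() {
    # "docker ps -q" prints an id only when a matching container is running.
    if [ -n "$(docker ps -q --filter name="$NAME" --filter status=running)" ]; then
        echo "container $NAME is running"
    else
        echo "container $NAME is down -- restarting"
        # Try to restart the stopped container (keeps its state); fall back
        # to launching a fresh one from the image if it is gone entirely.
        docker start "$NAME" || \
            docker run -dit --name "$NAME" -v /root/mlcode:/mlcode deeplearning:v1
    fi
}
```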