Deploying a Flask Container for Helmet Detection by using Jenkins

Original article was published by Mohitjangir on Deep Learning on Medium


Head injuries are a major cause of death, injury, and disability among users of motorized two-wheeled vehicles. The lack or inappropriate use of helmets has been shown to increase the risk of fatalities and injuries resulting from road crashes. Helmet use has been shown to reduce fatal and serious head injuries by between 20% and 45% among motorized two-wheeler users.

We have developed a web application that gives a count of the people wearing and not wearing helmets in an image. Internally, the web app uses the YOLO object detection algorithm.

Overview of the YOLO Algorithm:

YOLO (You Only Look Once) is a real-time object detection algorithm, and one of the most effective. Object detection is a problem in computer vision where we have to recognize both the "what" and the "where": which objects are inside a given image, and where in the image they are located.

This algorithm works by performing only one forward pass: a single CNN simultaneously predicts multiple bounding boxes and the class probabilities for those boxes. YOLO trains on full images and directly optimizes detection performance.

Darknet is an open-source framework for training neural networks, written in C/CUDA, and it serves as the basis for YOLO.

Clone the repo locally. To compile it, run make. But first, if we intend to use GPU capability, we need to edit the first two lines of the Makefile, where we tell it to compile for GPU usage with CUDA drivers.
The repo comes with multiple configuration files for training on different architectures. We can use it immediately for detection by downloading pre-trained weights that people have shared.
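The steps above can be sketched as follows (the repo URL is the upstream darknet repository; `GPU` and `CUDNN` are the flags on the first two lines of its Makefile):

```shell
# Clone and compile darknet; flip the first two Makefile flags
# to build with CUDA support before running make.
git clone https://github.com/pjreddie/darknet
cd darknet
sed -i 's/^GPU=0/GPU=1/; s/^CUDNN=0/CUDNN=1/' Makefile
make
```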


The pretrained weights can be downloaded with a single command. These weights have been trained on the COCO dataset, covering 80 classes.
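Assuming the standard YOLOv3 weights hosted on the upstream site, the download looks like:

```shell
# Download COCO-pretrained YOLOv3 weights
wget https://pjreddie.com/media/files/yolov3.weights
```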

Role of DevOps:

Problems with AI models :

1] Difficulty with deployment: Data science teams are using a variety of ML platforms, languages, and frameworks that rarely produce production-ready models.

2] Constant updates: These applications have a complex lifecycle, including frequent updates that, when done manually, are time-consuming. Model updates also require significant production testing and validation to maintain production model quality.

DevOps manages the dynamic nature of machine learning applications: models can be updated frequently, with each new model tested and validated, and even swapped on the fly while the business application keeps serving requests. DevOps also provides advanced monitoring software that can give us data drift analysis, model-specific metrics, alerts, etc.

In this project we are going to use two common DevOps tools:
1] Docker
2] Jenkins

What is Docker?

Docker containers provide a way to get a grip on software. We can use them to wrap up an application in such a way that its deployment and runtime issues — how to expose it on a network, how to manage its use of storage, memory, and I/O, how to control access permissions — are handled outside of the application itself.

We can run Docker containers on any OS-compatible host (Linux, Windows, or macOS) that has the Docker runtime installed.
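As a minimal sketch (the image and container names here are placeholders, not the project's actual names):

```shell
# Launch a containerized Flask app in the background,
# publishing Flask's default port 5000 on the host.
docker run -d --name helmet-app -p 5000:5000 helmet-flask:latest
```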

What is Jenkins?

It is an open-source automation tool written in Java, with plugins built for continuous integration (CI).

Continuous Integration:

  • First, a developer commits the code to the source code repository. Meanwhile, the Jenkins server checks the repository at regular intervals for changes.
  • Then the Jenkins server detects the changes in the source code repository, pulls them, and starts preparing a new build.
  • If the build fails, the concerned team is notified.
  • If the build is successful, Jenkins deploys the build to the test server.
  • It will continue to check the source code repository for changes made in the source code and the whole process keeps on repeating.

Using all of the above tools, we are going to build the project!


1] Data Preparation :

As with any deep learning task, the first and most important step is to prepare the dataset. For this project we need to collect images of various two-wheeler users.

To do so, we used a Google Images downloader extension from the Chrome Web Store. This extension downloads all the images on a given web page and saves them as a zip file.

We have downloaded around 350 images.

For training a YOLO model on our custom dataset, we need to provide each image with a corresponding annotation file giving the class and coordinates of the objects we want to detect.
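In the darknet/YOLO annotation format, each image gets a .txt file with one line per object: a class id followed by the box center, width, and height, all normalized to the range 0–1. The values below are illustrative:

```text
# <class_id> <x_center> <y_center> <width> <height>
0 0.512 0.431 0.210 0.184
1 0.304 0.618 0.152 0.143
```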

For annotation purposes we have used a tool called LabelImg.

This is a simple tool for graphically labeling images. It's written in Python and uses Qt for its graphical interface. It's an easy, free way to label a few hundred images.

In this way we drew boxes around each object of the classes we want the model to detect, and after saving we get a text file containing the required class id and coordinates.

We repeated this for all of the images.

Alternatively, we could have downloaded the required dataset from Open Images Dataset V6.

2] Training the model

To train the model, we followed a YouTube video.

We changed all the config files according to our requirements and started the training process.
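With darknet, training is started from the command line; the .data/.cfg file names and the pretrained backbone below are typical placeholders, not necessarily the exact ones we used:

```shell
# Train a custom YOLOv3 model: obj.data points at the train/test
# image lists and class names; the .cfg holds the network definition;
# darknet53.conv.74 provides pretrained backbone weights.
./darknet detector train data/obj.data cfg/yolov3-custom.cfg darknet53.conv.74
```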

We trained the model for 910 epochs, at which point it had a loss of only 1.8.

This took around 4 hours.

3] Creating a Flask Application

Flask is a web framework which provides us with tools, libraries and technologies that allow us to build a web application. This web application can be some web pages, a blog, a wiki or go as big as a web-based calendar application or a commercial website.

For the project we built a Flask app that accepts an image from the user, gets predictions from the YOLO model, and displays the results on a new page.

The app shows the image with bounding boxes drawn around everyone wearing or not wearing a helmet, then shows the count for each class.
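The counting step itself is simple. As a rough, standalone sketch (the label names and the one-label-per-line format are assumptions, not the app's actual output):

```shell
# Pretend these are the class labels of the boxes YOLO returned
predictions="helmet
no-helmet
helmet"

# Count how many detections fall in each class
helmet_count=$(printf '%s\n' "$predictions" | grep -c '^helmet$')
no_helmet_count=$(printf '%s\n' "$predictions" | grep -c '^no-helmet$')
echo "wearing: $helmet_count, not wearing: $no_helmet_count"
# prints "wearing: 2, not wearing: 1"
```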

Now comes the DevOps part.

4] Jenkins JOB-1

Whenever the developer commits new code to GitHub, this job is triggered.

Poll SCM is used to monitor the GitHub repo periodically.
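The Poll SCM schedule uses cron syntax; for example, to check the repository every minute:

```text
* * * * *
```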

5] Jenkins JOB-2

The container is created and the application's script is executed. This job runs only when JOB-1 is stable.
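A JOB-2 "Execute shell" build step could look like this sketch (the image/container names and Dockerfile location are assumptions):

```shell
# Remove any old container, rebuild the image from the workspace
# Dockerfile, and launch a fresh container.
docker rm -f helmet-app 2>/dev/null || true
docker build -t helmet-flask:latest .
docker run -d --name helmet-app -p 5000:5000 helmet-flask:latest
```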

6] Jenkins JOB-3

This job is purely for monitoring purposes. It is built periodically and checks whether our container is running or not. If the container is down, it launches a new container and also notifies the developer.
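A JOB-3 build step might be sketched as follows (names assumed as before; the actual notification step depends on how Jenkins' e-mail settings are configured):

```shell
# Relaunch the container if it is not in the list of running containers.
if ! docker ps --format '{{.Names}}' | grep -q '^helmet-app$'; then
  docker run -d --name helmet-app -p 5000:5000 helmet-flask:latest
  echo "helmet-app was down; a new container was launched"  # notify here
fi
```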

Screenshots of the Website

1] Choose the image:

2] Click Predict:

3] Click Get-result:

Testing another image:

We also tried videos, but the detection time per frame was very high: the model took around 2 seconds per frame.

Future Scope

In the near future, as camera quality improves and costs come down, we could also detect number plates in order to send a notice and penalty to riders not wearing safety gear. This could easily be automated, freeing traffic police to focus on more demanding tasks.

GitHub link:


This is just a basic setup for integrating deep learning models with DevOps. We could also use another popular tool, Kubernetes, which can monitor containers and scale them when the load increases. In today's world, many companies use cloud services to deploy their applications and maintain them with zero downtime.