Using the new Tensorflow 2.x C++ API for object detection (inference)

Original article was published by Raktim Bora on Artificial Intelligence on Medium



Serving Tensorflow Object Detection models in C++

Photo by Anoir Chafik on Unsplash

In a recent project, I had to integrate an object detection model into an existing C++ application code base for inference. This meant creating a C++ inference wrapper for our model, which was trained in Python and serialized as TensorFlow binary checkpoints. We were on the latest stable release of TensorFlow, v2.3, and were also using the TensorFlow Object Detection API v2, which was recently upgraded to be compatible with TensorFlow 2.x.

This seemingly straightforward task took me a good few days and a couple of issues raised on the TensorFlow GitHub repo, mainly because building the TensorFlow C++ API from source is still painful, and the lack of documentation around the C++ API means you have to figure things out the hard way. If you are attempting the same, this article might save you some time and agony.

Oh, btw, this article is mostly code, so you can view it directly on GitHub if that’s the sort of thing you like.

Game plan

  1. Build the latest TensorFlow C++ API from source (tested with v2.3.0) using Docker.
  2. Load a SavedModel using SavedModelBundle.
  3. Serve predictions using the new ClientSession method (vs. the old Session way).

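Step 2 of the plan can be sketched in a few lines. This is a minimal, illustrative sketch against the TF v2.3 C++ API, not the repo’s exact code; the `"saved_model"` path is a placeholder for the directory that contains `saved_model.pb` and the `variables/` subfolder.

```cpp
#include <iostream>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"

int main() {
  tensorflow::SavedModelBundle bundle;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;

  // "saved_model" is a placeholder: point it at the export directory
  // holding saved_model.pb and variables/.
  tensorflow::Status status = tensorflow::LoadSavedModel(
      session_options, run_options, "saved_model",
      {tensorflow::kSavedModelTagServe}, &bundle);

  if (!status.ok()) {
    std::cerr << "Failed to load SavedModel: " << status.ToString() << "\n";
    return 1;
  }
  std::cout << "SavedModel loaded\n";
  return 0;
}
```

Once loaded, `bundle.session` is a regular `tensorflow::Session` you can call `Run` on.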
Checking thrusters

A few things before we start:

  1. Use Docker. I cannot stress this enough! You don’t want to sabotage your existing native TensorFlow installation, especially with those nasty CUDA and cuDNN version compatibility issues. But if you want, you could just use the steps in the Dockerfile to build the API from source natively.
  2. You might still want to develop locally without having to compile your C++ code against the container. To achieve this, I built the C++ API from source inside a container, using the exact CUDA and cuDNN versions of my native TensorFlow installation. Once built, I simply transferred the header and library files to the host system and made them accessible to TensorFlow and Visual Studio Code.
  3. We demonstrate the example using the TensorFlow Object Detection API v2 and a pre-trained EfficientDet-D3 model. Get the model from the TF model zoo. But you could use any model you want, as long as it’s in the SavedModel format.

Liftoff

Grab the Docker image from Docker Hub:

docker pull borarak/tensorflow_v2_cpp

Here is the Dockerfile if you wish to build it yourself.

Orbit

The C++ code is pretty self-explanatory, divided into a saved_model_loader.h header file (where most of the action is) and a get_prediction.cpp file, which is the actual interface for inference.
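To give a taste of the ClientSession part of the plan, here is a hedged sketch of reading an image into an input tensor using TensorFlow ops instead of an external image library. `ReadImage` is a hypothetical helper name for illustration, not necessarily what the repo uses.

```cpp
#include <string>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"

// Hypothetical helper: decode a JPEG from disk into a uint8 tensor of
// shape [1, height, width, 3], as Object Detection API v2 models expect.
tensorflow::Tensor ReadImage(const std::string& file_path) {
  using namespace tensorflow;
  Scope scope = Scope::NewRootScope();

  auto file = ops::ReadFile(scope, file_path);
  auto decoded = ops::DecodeJpeg(scope, file, ops::DecodeJpeg::Channels(3));
  auto batched = ops::ExpandDims(scope, decoded, 0);  // add batch dimension

  // ClientSession runs the small graph built in this scope.
  std::vector<Tensor> outputs;
  ClientSession session(scope);
  TF_CHECK_OK(session.Run({batched}, &outputs));
  return outputs[0];
}
```

This is the "new" TF 2.x style: you build a small op graph in a `Scope` and execute it with a `ClientSession`, rather than constructing a `GraphDef` by hand and feeding it to a raw `Session`.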

Let’s wrap up with some Doggy detection

Details on compiling the code are in the GitHub README.
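Inference itself then comes down to a single `Run` on the loaded bundle. The sketch below assumes a loaded `bundle` and a decoded `image` tensor as above; the tensor names are assumptions typical of an Object Detection API v2 export, and the output indices vary by model, so list the real ones with `saved_model_cli show --dir <model_dir> --all`.

```cpp
// Assumes `bundle` is a loaded tensorflow::SavedModelBundle and `image`
// is a uint8 tensor of shape [1, H, W, 3].
std::vector<tensorflow::Tensor> predictions;
TF_CHECK_OK(bundle.session->Run(
    {{"serving_default_input_tensor:0", image}},  // assumed input name
    {"StatefulPartitionedCall:0",    // e.g. detection boxes
     "StatefulPartitionedCall:4"},   // e.g. detection scores; indices vary
    {}, &predictions));

// Keep only detections above a confidence threshold.
auto scores = predictions[1].flat<float>();
for (int i = 0; i < scores.size(); ++i) {
  if (scores(i) > 0.5f) {
    std::cout << "detection " << i << ": score " << scores(i) << "\n";
  }
}
```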

Original image from Unsplash: https://unsplash.com/photos/2_3c4dIFYFU

We detected the doggies, which, btw, was always the plan.

Houston, mission successful!