Image Classification using Twilio & RedisAI

AIM:

We want to build something similar to the screenshot below: we accept an image from the user and return the predicted labels.

Requirements:

- Redis and the RedisAI module (installation covered below)
- Python 3 with the redisai, ml2rt, scikit-image, flask, twilio and requests packages
- A Twilio account with access to the WhatsApp Sandbox
- ngrok, to expose the local Flask app to the web

Let’s build:

Step 1: Installation for Redis

Download, extract and compile Redis with:

$ wget http://download.redis.io/releases/redis-5.0.7.tar.gz
$ tar xzf redis-5.0.7.tar.gz
$ cd redis-5.0.7
$ make

The binaries that are now compiled are available in the src directory. Run Redis with:

$ src/redis-server

You can interact with Redis using the built-in client:

$ src/redis-cli
redis> set foo bar
OK
redis> get foo
"bar"

Step 2: Installation of RedisAI

Before installing RedisAI, let’s first understand what it is.

So, what is RedisAI? In a nutshell, it is a new option for productionizing deep learning models, born from a collaboration between [tensor]werk and RedisLabs. It represents an opportunity to strike a new balance between operational simplicity, industrial-grade reliability and the ability to serve small applications up to large deployments at scale.

RedisAI is built as a module that can be loaded into a Redis server with the --loadmodule switch. With the RedisAI module, Redis can store another data type, the Tensor (another word for multi-dimensional arrays). For the C-savvy readers, tensors in Redis are represented as DLPack structs, which is an RFC for a common in-memory tensor structure and operator interface for deep learning systems. Here’s what a DLPack tensor looks like:

typedef struct {
  void* data;            // pointer to the underlying memory
  DLContext ctx;         // device context (CPU, GPU, ...)
  int ndim;              // number of dimensions
  DLDataType dtype;      // data type of the elements
  int64_t* shape;        // size of each dimension
  int64_t* strides;      // strides per dimension, in number of elements
  uint64_t byte_offset;  // offset into the data pointer, in bytes
} DLTensor;

A user can send multidimensional arrays to RedisAI using the Redis client and store them in Redis as DLPack tensors.
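For example, here is a minimal sketch of storing and reading back a tensor from Python, using the same redisai-py client API that appears in the code later in this post (it assumes a RedisAI-enabled server running on the default port):

import numpy as np
import redisai as rai

con = rai.Client(host='localhost', port=6379)

# Store a 2x2 float array in Redis as a tensor
arr = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
con.tensorset('my_tensor', rai.BlobTensor.from_numpy(arr))

# Read it back as a numpy array
out = con.tensorget('my_tensor', as_type=rai.BlobTensor).to_numpy()
print(out)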

Let’s start with installation of RedisAI:

To quickly try out RedisAI, launch an instance using Docker:

docker run -p 6379:6379 -it --rm redisai/redisai

For a Docker instance with GPU support, you can launch it from the redisai/redisai:latest-gpu image:

docker run -p 6379:6379 --gpus all -it --rm redisai/redisai:latest-gpu

But if you’d like to build the Docker image yourself, you need a machine that has the Nvidia driver (CUDA 10.0), nvidia-container-toolkit and Docker 19.03+ installed. For detailed information, check out the nvidia-docker documentation:

docker build -f Dockerfile-gpu -t redisai-gpu .
docker run -p 6379:6379 --gpus all -it --rm redisai-gpu

Building RedisAI from source:

From a clone of the RedisAI repository, the following will download and build the libraries for the backends (TensorFlow, PyTorch, ONNXRuntime) for your platform. Note that this requires CUDA 10.0 to be installed:

bash get_deps.sh

Alternatively, run the following to only fetch the CPU-only backends even on GPU machines.

bash get_deps.sh cpu

Once the dependencies are downloaded, build the module itself. Note that CMake 3.0 or higher is required.

mkdir build
cd build
cmake ..
make && make install
cd ..

Note: in order to use the PyTorch backend on Linux, at least gcc 4.9.2 is required.

Running the server

You will need a redis-server version 4.0.9 or greater. This should be available in most recent distributions:

redis-server --version
Redis server v=4.0.9 sha=00000000:0 malloc=libc bits=64 build=c49f4faf7c3c647a

To start Redis with the RedisAI module loaded:

redis-server --loadmodule install-cpu/redisai.so
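
To confirm the module actually loaded, you can ask the server for its module list. Here is a quick sanity check using the redis-py client (a sketch assuming the server runs on the default host and port):

import redis

r = redis.Redis(host='localhost', port=6379)

# MODULE LIST returns the modules loaded into this server;
# the RedisAI module is listed under the name 'ai'
print(r.execute_command('MODULE', 'LIST'))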

Now we’re ready with the first part of our project: RedisAI is installed and the server is running with the module loaded. Let’s work on the second part now!

Step 3: Object Recognition using RedisAI [TensorFlow + Imagenet]

import json
import redisai as rai
from ml2rt import load_model, load_script
from skimage import io
from cli import arguments


def predict_object():
    # Pick the device based on the command-line flags
    if arguments.gpu:
        device = rai.Device.gpu
    else:
        device = rai.Device.cpu
    con = rai.Client(host=arguments.host, port=arguments.port)

    tf_model_path = 'models/tensorflow/imagenet/resnet50.pb'
    script_path = 'models/tensorflow/imagenet/data_processing_script.txt'
    img_path = 'images/x.png'
    class_idx = json.load(open("data/imagenet_classes.json"))

    # Read the image and load the model and processing script from disk
    image = io.imread(img_path)
    tf_model = load_model(tf_model_path)
    script = load_script(script_path)

    # Register the TensorFlow model and the pre/post-processing script in RedisAI
    con.modelset(
        'imagenet_model', rai.Backend.tf, device,
        inputs=['images'], outputs=['output'], data=tf_model)
    con.scriptset('imagenet_script', device, script)

    # Send the image to Redis as a tensor, then run the pipeline:
    # pre-process -> model -> post-process
    tensor = rai.BlobTensor.from_numpy(image)
    con.tensorset('image', tensor)
    con.scriptrun('imagenet_script', 'pre_process_3ch', 'image', 'temp1')
    con.modelrun('imagenet_model', 'temp1', 'temp2')
    con.scriptrun('imagenet_script', 'post_process', 'temp2', 'out')

    # Fetch the predicted class index and map it to a human-readable label
    final = con.tensorget('out', as_type=rai.BlobTensor)
    ind = final.to_numpy().item()
    return class_idx[str(ind)]

The code snippet above reads the image, loads the model and the processing script into RedisAI, runs the pipeline, and returns the predicted label.
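
The snippet imports arguments from a cli module that isn’t shown in the post. Here is a minimal sketch of what it might look like, with the host, port and gpu fields inferred from how arguments is used above (the defaults are assumptions):

import argparse

parser = argparse.ArgumentParser(description='RedisAI ImageNet demo')
parser.add_argument('--host', default='localhost', help='Redis host')
parser.add_argument('--port', type=int, default=6379, help='Redis port')
parser.add_argument('--gpu', action='store_true', help='run the model on GPU')
arguments = parser.parse_args()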

Step 4: Set up your Twilio account and write a Flask wrapper to host this application

We have to write code that accepts an image from the Twilio WhatsApp API, saves it to a fixed location in our file system from where our RedisAI script can access it, and returns the identified object as a message.

The script below does that:

import requests
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse
from tensorflow_imagenet import predict_object

DOWNLOAD_DIRECTORY = '../twilio-demo/images'

app = Flask(__name__)


@app.route("/sms", methods=['GET', 'POST'])
def sms_reply():
    """Respond to an incoming image with the predicted label."""
    resp = MessagingResponse()
    if request.values['NumMedia'] != '0':
        # Save the incoming image where the RedisAI script expects it
        filename = 'x.png'
        with open('{}/{}'.format(DOWNLOAD_DIRECTORY, filename), 'wb') as f:
            image_url = request.values['MediaUrl0']
            f.write(requests.get(image_url).content)
        label = predict_object()
        resp.message(label)
        print(label)
    return str(resp)


if __name__ == "__main__":
    app.run(debug=True)

Now you have to generate a public endpoint that can be accessed by the Twilio WhatsApp Sandbox.

Your Flask app will need to be visible from the web so Twilio can send requests to it, and ngrok lets us do this. With ngrok installed, start the Flask app, then run ngrok http 5000 in a new terminal tab and note the public URL it prints.

(Screenshots: running the Flask app and getting the public ngrok URL.)

Grab that ngrok URL to configure the Twilio WhatsApp Sandbox. We will try this on WhatsApp, so let’s go ahead and do it (either in the Sandbox if you want to do testing, or on your main WhatsApp Sender number if you have one provisioned). In this example we show the Sandbox page:

And we’re good to go! Let’s test our application on WhatsApp: if everything works as expected, we can send images to the Sandbox and get predicted labels in return.

Hurray! It works!

Conclusion:

RedisAI is an efficient and powerful Redis module. You’ve learned enough about RedisAI by now; go explore it yourself if you’re interested. Whether you work on computer vision, natural language processing, or any other deep neural networks or traditional ML algorithms: if you have a model that you need to serve in production, chances are RedisAI can make a difference in the way you operate and might make your life easier. [And Twilio always helps in creating cool demos! 😊]