Deploy your deep learning models on IoT devices using TensorFlow Lite


This information should be enough to get you started. So what is the workflow: how do you install, convert, and deploy your models?

Just refer to the steps and sample snippets.

  1. First, choose your preferred model; train, test, and validate it on your high-end GPU/CPU system. Below is the code of my Fire Detection model, written in Keras (for your reference).
# Keras code -- model layers (reference code)
# Works with TF 1.x or TF 2.x (TF 2.x used here)
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, AveragePooling2D

model = Sequential()
# X is the array of training images, e.g. shape (n, 64, 64, 3)
model.add(Conv2D(filters=16, kernel_size=(3, 3), activation='relu', input_shape=X.shape[1:]))
model.add(AveragePooling2D())
model.add(Dropout(0.5))
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))
model.add(AveragePooling2D())
model.add(Dropout(0.5))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(AveragePooling2D())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(units=128, activation='relu'))
model.add(Dense(units=2, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
Model Training
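The training step can be sketched as follows. This is a minimal, hypothetical example that fits a reduced version of the model above on synthetic 64x64 RGB images; the random arrays stand in for the real fire dataset, which is not shown in the article:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, AveragePooling2D

# Synthetic stand-ins for the real fire dataset (illustration only)
X = np.random.rand(8, 64, 64, 3).astype('float32')
y = np.random.randint(0, 2, size=(8,))

# A reduced version of the article's model, just to show the training call
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=X.shape[1:]),
    AveragePooling2D(),
    Flatten(),
    Dense(2, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Train for a single epoch to illustrate the call; a real run would
# use many epochs and a validation split
history = model.fit(X, y, epochs=1, batch_size=4, verbose=0)
```

In a real run you would also pass `validation_data` and monitor accuracy before moving on to conversion.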

2. Next, when the model is ready, save it using Keras's built-in model-saving feature. I have saved it in the .h5 format; you can also save it in the .pb (SavedModel) format.

model.save('TrainedModels/Fire-64x64-color-v7.1-soft.h5')
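If you want the .pb (SavedModel) format instead of .h5, TensorFlow 2.x exposes `tf.saved_model.save` for this. A minimal sketch, using a tiny stand-in model (in your case it would be the trained fire-detection model) and a temporary directory as the illustrative export path:

```python
import os
import tempfile
import tensorflow as tf

# Tiny stand-in model; substitute your trained model here
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

# Exporting to a directory writes the SavedModel (.pb) format
export_dir = os.path.join(tempfile.mkdtemp(), 'fire_savedmodel')
tf.saved_model.save(model, export_dir)

# The directory now contains saved_model.pb plus a variables/ folder
print(os.listdir(export_dir))
```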

3. Now comes the role of the TensorFlow Lite converter API. Call the TFLiteConverter to convert the saved model to the .tflite version and save it. Use the code that matches your version of TensorFlow.

# Converting tf.keras models
# TensorFlow 1.x implementation
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file('/content/Fire-64x64-color-v7-soft.h5')
tfmodel = converter.convert()
# saving the tflite model
open("fire_lite_model.tflite", "wb").write(tfmodel)
# TensorFlow 2.x implementation
import tensorflow as tf
model = tf.keras.models.load_model('C:/Users/samar/fireDet/Fire-64x64-color-v7-soft.h5')
# Convert the model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# save the tflite model
open("model.tflite", "wb").write(tflite_model)

For converting SavedModels and frozen-graph models, please check https://www.tensorflow.org/lite/r1/convert/python_api
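As a sketch of the SavedModel route, the TF 2.x converter also accepts a SavedModel directory via `tf.lite.TFLiteConverter.from_saved_model`. Here a tiny stand-in model is exported and converted; the model and paths are illustrative, not the article's actual fire-detection model:

```python
import os
import tempfile
import tensorflow as tf

# Tiny stand-in model; in practice this would be your trained network
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
export_dir = os.path.join(tempfile.mkdtemp(), 'saved_model')
tf.saved_model.save(model, export_dir)

# Convert the SavedModel directory instead of an .h5 file
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_bytes = converter.convert()

# The result is the raw flatbuffer, ready to write to a .tflite file
print(len(tflite_bytes) > 0)
```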

After saving the models, if we compare their sizes, the .tflite file is less than half the size of the original model. This size difference matters even more for deeper models, e.g. YOLO, R-CNN, or ResNet50: there the model can exceed hundreds of megabytes, a load you cannot put on your microprocessor.
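You can check the size difference yourself with a few lines of standard-library Python. The file names below are illustrative stand-ins (dummy files of known size) for your actual .h5 and .tflite files:

```python
import os

# Illustrative stand-ins for the real model files
with open('model_full.h5.dummy', 'wb') as f:
    f.write(b'\0' * 1000)   # pretend original .h5 model
with open('model.tflite.dummy', 'wb') as f:
    f.write(b'\0' * 400)    # pretend converted .tflite model

h5_size = os.path.getsize('model_full.h5.dummy')
tflite_size = os.path.getsize('model.tflite.dummy')
print(f'h5: {h5_size} bytes, tflite: {tflite_size} bytes, '
      f'ratio: {tflite_size / h5_size:.2f}')
```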

TFLITE SIZE Vs ORIGINAL FILE (H5) SIZE