Source: Deep Learning on Medium
Preprocessing Layer in CNN models
This is the first machine learning tip we learned during the development of ximilar.com. It could save you a lot of time and bugs. One of the most common mistakes novice machine learning practitioners make is forgetting to normalize input images. No wonder! Every model requires a different input normalization when you are doing transfer learning. VGG, for example, requires subtracting the vector [123.68, 116.779, 103.939] from the RGB image. MobileNetV2 requires inputs in the interval [-1, 1]. PyTorch models often use yet another normalization.
With old TensorFlow 1 (rest in peace) you could simply add a preprocessing operation to the graph and freeze the model. However, in TensorFlow 2+ it is not so easy! You need to create your own preprocessing layer.
So first, define our preprocess method:
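The original snippet is not shown here, so this is a minimal sketch: a preprocess function that scales RGB [0, 255] inputs into [-1, 1], the convention MobileNetV2 expects. For VGG you would instead subtract the mean vector mentioned above.

```python
import tensorflow as tf

def preprocess_input(images):
    # Assumed example: scale RGB values from [0, 255] to [-1, 1]
    # (the MobileNetV2 convention). Swap in your model's normalization.
    return (tf.cast(images, tf.float32) / 127.5) - 1.0
```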
Then create your custom layer inheriting from tf.keras.layers.Layer and apply the function to the input in its call method:
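A sketch of such a layer, again assuming the [-1, 1] scaling from the previous step (the layer name PreprocessLayer is illustrative, not from the original post):

```python
import tensorflow as tf

class PreprocessLayer(tf.keras.layers.Layer):
    """Normalizes RGB [0, 255] inputs inside the model itself."""

    def call(self, inputs):
        # Assumed normalization: scale [0, 255] to [-1, 1].
        return (tf.cast(inputs, tf.float32) / 127.5) - 1.0
```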
When creating the model, insert the layer before calling the pretrained base model:
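A possible wiring with the functional API, using MobileNetV2 as the base (weights=None keeps this sketch offline; for real transfer learning you would use weights="imagenet" and the class count of your task instead of the placeholder 10):

```python
import tensorflow as tf

# Hypothetical preprocessing layer from the previous step:
# scales RGB [0, 255] inputs to the [-1, 1] range MobileNetV2 expects.
class PreprocessLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return (tf.cast(inputs, tf.float32) / 127.5) - 1.0

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)

inputs = tf.keras.Input(shape=(224, 224, 3))
x = PreprocessLayer()(inputs)  # normalization happens inside the model
x = base(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```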
And that’s it!
From now on, your model always accepts plain RGB [0, 255] images and the normalization of the input is done inside the model. If you save such a model in the Keras .h5 format, do not forget to specify custom objects during model loading. However, the SavedModel and even the TFLite format should be fine!
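The custom-objects caveat can be sketched like this: save a model containing the custom layer to .h5, then pass the class at load time (without custom_objects the load would fail with an unknown-layer error). The layer and file name here are illustrative.

```python
import os
import tempfile
import tensorflow as tf

class PreprocessLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return (tf.cast(inputs, tf.float32) / 127.5) - 1.0

inputs = tf.keras.Input(shape=(4,))
outputs = PreprocessLayer()(inputs)
model = tf.keras.Model(inputs, outputs)

path = os.path.join(tempfile.mkdtemp(), "model.h5")
model.save(path)

# The custom layer class must be supplied when loading an .h5 file.
restored = tf.keras.models.load_model(
    path, custom_objects={"PreprocessLayer": PreprocessLayer})
```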
You will save the people using your model a lot of debugging hours spent wondering why it is not working!
See you next time!
Michal from Ximilar