Face Recognition using Transfer Learning on MobileNet

Original article was published on Deep Learning on Medium


Today, I am going to use transfer learning to demonstrate how it can be applied to a pre-trained model (here, MobileNet) to save computational power and resources.

  1. Creating a dataset.

Since I need a dataset for facial recognition, instead of manually cropping and resizing every image and saving it to a folder, I use a single script that creates as many images as you want, in one go, at the required size.

import cv2
import numpy as np

# Load HAAR face classifier
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def face_extractor(img):
    # Detect faces and return the cropped face,
    # or None if no face was found
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)

    if len(faces) == 0:
        return None

    # Crop the face(s) found; the last one detected is returned
    for (x, y, w, h) in faces:
        cropped_face = img[y:y+h, x:x+w]
    return cropped_face

# Initialize webcam
cap = cv2.VideoCapture(0)
count = 0

# Collect 100 samples of your face from webcam input
while True:
    ret, frame = cap.read()
    cropped = face_extractor(frame)
    if cropped is not None:
        count += 1
        face = cv2.resize(cropped, (300, 300))

        # Save file in the specified directory with a unique name
        file_name_path = r"C:\Users\Shinchan\Music\Akashdeep" + str(count) + '.jpg'
        cv2.imwrite(file_name_path, face)

        # Put the count on the image and display a live preview
        cv2.putText(face, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX,
                    1, (0, 255, 0), 2)
        cv2.imshow('Face Cropper', face)
    else:
        print("Face not found")

    if cv2.waitKey(1) == 27 or count == 100:  # 27 is the Esc key
        break

cap.release()
cv2.destroyAllWindows()
print("Samples Taken")

Here, I have created 100 samples of size 300×300 for testing, along with some more datasets of different people, all uploaded to the GitHub repository linked at the end of the article.

Here, "C:\Users\Shinchan\Music\Akashdeep" is the path where the samples will be stored. You can use your own location.

2. Now, we will apply the concept of transfer learning to the MobileNet model.

We will load the pre-trained model; you can either download the weights locally beforehand, or they will be downloaded automatically from the internet.

The model's pre-trained layers can be listed. By default, every layer's `trainable` attribute is set to `True`, meaning it is ready to be trained again.

Now, we will set every pre-trained layer's `trainable` attribute to `False` so that its weights cannot be updated during training.
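As a sketch of the steps above (assuming TensorFlow's Keras API and MobileNet's default 224×224 input size; the exact setup in the original screenshots may differ), loading the base model, inspecting the `trainable` flags, and then freezing every pre-trained layer might look like:

```python
from tensorflow.keras.applications import MobileNet

# Load MobileNet pre-trained on ImageNet, without its top
# classification block; the weights download automatically
# if they are not already cached locally
base_model = MobileNet(weights='imagenet',
                       include_top=False,
                       input_shape=(224, 224, 3))

# By default every pre-trained layer is trainable (True)
for i, layer in enumerate(base_model.layers):
    print(i, layer.__class__.__name__, layer.trainable)

# Freeze all pre-trained layers so their weights cannot change
for layer in base_model.layers:
    layer.trainable = False
```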

3. Now, we will add the new layers for the classes that we want to train.

4. Adding all the layers to the model and printing the model summary.
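Continuing the sketch, the new head below (a global-average-pooling layer followed by dense layers and a softmax) is an assumption, since the article's screenshots are not reproduced here; `num_classes` stands for the number of people in your dataset:

```python
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

num_classes = 4  # placeholder: number of people in the dataset

# Frozen pre-trained base
base_model = MobileNet(weights='imagenet', include_top=False,
                       input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False

# New trainable head for our own classes
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(512, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)

# Combine base and head into one model and inspect it
model = Model(inputs=base_model.input, outputs=predictions)
model.summary()
```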

5. Now, preparing the training and validation datasets that we will feed into the model, and augmenting them.

Here, we use Keras's ImageDataGenerator class to handle all of this.

ImageDataGenerator accepts an input batch of images, randomly transforms the batch, and returns the transformed images in place of the originals.
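A minimal sketch using ImageDataGenerator follows. The directory names and augmentation parameters are illustrative assumptions; in practice `dataset/train` and `dataset/validation` would each hold one sub-folder of face images per person. Here, a few random placeholder images are written first so the snippet runs end to end:

```python
import os
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator, save_img

# Write a tiny placeholder dataset (replace with your real folders:
# one sub-folder per person under train/ and validation/)
for split in ('train', 'validation'):
    for person in ('person_a', 'person_b'):
        folder = os.path.join('dataset', split, person)
        os.makedirs(folder, exist_ok=True)
        for i in range(2):
            save_img(os.path.join(folder, f'{i}.jpg'),
                     np.random.randint(0, 256, (224, 224, 3)).astype('uint8'))

# Augment the training images; only rescale the validation images
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=20,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'dataset/train', target_size=(224, 224),
    batch_size=32, class_mode='categorical')
validation_generator = val_datagen.flow_from_directory(
    'dataset/validation', target_size=(224, 224),
    batch_size=32, class_mode='categorical')
```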

6. Now, we will train our model and save it to a file with the ".h5" extension.
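Putting it together as a self-contained sketch: the optimizer, loss, and epoch count are assumptions, and a tiny batch of random tensors stands in for the real data so the snippet runs anywhere; in practice you would pass the training and validation generators from the previous step to `model.fit`:

```python
import numpy as np
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

num_classes = 4  # placeholder: number of people in the dataset

# Frozen base + new head, as assembled in the previous steps
base_model = MobileNet(weights='imagenet', include_top=False,
                       input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(512, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# One tiny training step on random data; with a real dataset you
# would call model.fit(train_generator, validation_data=...)
X = np.random.rand(4, 224, 224, 3).astype('float32')
y = np.eye(num_classes)[np.random.randint(0, num_classes, 4)]
model.fit(X, y, epochs=1, verbose=0)

# Persist the trained model in HDF5 (".h5") format
model.save('face_recognition_mobilenet.h5')
```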