Face Recognition using VGG16


In this article we will build a face recognition model with VGG16 using the concept of transfer learning.

Transfer learning is about “transferring” learnt weights to a new problem: we reuse a model developed for one task on another, similar task.

VGG16 is an advanced CNN architecture. We will load the pre-trained VGG16 model, which works on input images of 224 × 224 pixels, using the ImageNet weights.
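A minimal sketch of this step with tf.keras (the variable names are illustrative, not necessarily those from the original notebook):

```python
from tensorflow.keras.applications import VGG16

img_rows, img_cols = 224, 224

# Load VGG16 with ImageNet weights, without its original classifier head
model = VGG16(weights='imagenet',
              include_top=False,
              input_shape=(img_rows, img_cols, 3))
```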

Then we freeze its layers so the pre-trained weights are not updated during training.
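Freezing can be done by marking every layer of the loaded model as non-trainable:

```python
# Keep the pre-trained convolutional weights fixed during training
for layer in model.layers:
    layer.trainable = False
```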

Now we will create a function called new in which we will define the new layers to be added to the existing model.

Now we attach the layers created by this function to the pre-trained model. Here the number of classes is 3 because we are using 3 faces for classification.
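A sketch of how such a head could be built and attached; the function name new and the 3 output classes come from the article, while the layer sizes inside the head are assumptions:

```python
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

def new(bottom_model, num_classes):
    """Builds a new classifier head on top of the frozen VGG16 base."""
    top_model = bottom_model.output
    top_model = Flatten(name='flatten')(top_model)
    top_model = Dense(512, activation='relu')(top_model)
    top_model = Dense(num_classes, activation='softmax')(top_model)
    return top_model

num_classes = 3  # three faces to classify

head = new(model, num_classes)
newmodel = Model(inputs=model.input, outputs=head)
newmodel.summary()
```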

The model summary confirms that our new layers have been added.

Now we will define the paths where the training and validation data are stored.

Then we will do some data augmentation, since we have relatively little data.
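The directory layout and augmentation settings below are placeholders; adjust them to your own dataset:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical folder structure: one sub-folder per person
train_data_dir = 'faces/train'
validation_data_dir = 'faces/validation'

# Augment the training images; only rescale the validation images
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   rotation_range=20,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   horizontal_flip=True)

validation_datagen = ImageDataGenerator(rescale=1. / 255)
```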

train_generator will read images from the training directory and resize them to the input size VGG16 expects; validation_generator does the same for the validation data.
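Something along these lines, with target_size set to the 224 × 224 input of VGG16:

```python
batch_size = 4

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = validation_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical')
```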

Here we are using RMSprop as the optimizer (passed to compile below).

We will use ModelCheckpoint to save the model; only the best model is kept, according to the validation loss.

We will use EarlyStopping so that if the validation loss does not decrease for 3 consecutive epochs, training stops.
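Both callbacks could be set up like this (the checkpoint file name is a placeholder):

```python
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

# Keep only the weights with the lowest validation loss
checkpoint = ModelCheckpoint('face_recognition_vgg16.h5',
                             monitor='val_loss',
                             mode='min',
                             save_best_only=True,
                             verbose=1)

# Stop training if the validation loss has not improved for 3 epochs
earlystop = EarlyStopping(monitor='val_loss',
                          patience=3,
                          restore_best_weights=True,
                          verbose=1)

callbacks = [checkpoint, earlystop]
```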

Then we compile the model.
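A sketch of the compile step, assuming tf.keras and the default RMSprop learning rate:

```python
from tensorflow.keras.optimizers import RMSprop

newmodel.compile(loss='categorical_crossentropy',
                 optimizer=RMSprop(learning_rate=0.001),
                 metrics=['accuracy'])
```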

We have set the number of epochs to 5 and the batch_size to 4.

The last epoch gave us an accuracy of almost 89%, and the loss has come down, so we save the model.
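Training and saving could look like this (with tf.keras, fit accepts the generators directly):

```python
epochs = 5

history = newmodel.fit(train_generator,
                       epochs=epochs,
                       callbacks=callbacks,
                       validation_data=validation_generator)

# ModelCheckpoint already wrote the best weights during training;
# save the final model as well for later use
newmodel.save('face_recognition_vgg16.h5')
```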

Now we will load the model.
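For example:

```python
from tensorflow.keras.models import load_model

classifier = load_model('face_recognition_vgg16.h5')
```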

We create two dictionaries, actors_dict and actors_dict_n, which specify the class names and the folders the test images are taken from.
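The actual names and folders depend on your dataset; a hypothetical version:

```python
# actors_dict maps a predicted class index to the actor's name,
# actors_dict_n maps the same index to the folder the test images come from.
# The names and paths below are placeholders.
actors_dict = {0: 'actor_one', 1: 'actor_two', 2: 'actor_three'}

actors_dict_n = {0: 'faces/validation/actor_one/',
                 1: 'faces/validation/actor_two/',
                 2: 'faces/validation/actor_three/'}
```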

We will write a function draw_test which shows the prediction in a window using OpenCV.
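One possible version of draw_test, which overlays the predicted name on the image before showing it:

```python
import cv2

def draw_test(name, pred, im):
    """Writes the predicted actor's name on the image and shows it in a window."""
    actor = actors_dict[pred]
    # Add a black border so the label does not cover the face
    expanded = cv2.copyMakeBorder(im, 80, 0, 0, 100,
                                  cv2.BORDER_CONSTANT, value=(0, 0, 0))
    cv2.putText(expanded, actor, (20, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow(name, expanded)
```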

Another function is created to pick random images from the test folder.
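A simple way to do this is to pick a random class folder and then a random file inside it:

```python
import os
from random import randint

def get_random_image(path):
    """Picks one random image from a randomly chosen class folder under path."""
    folders = [f for f in os.listdir(path)
               if os.path.isdir(os.path.join(path, f))]
    random_folder = folders[randint(0, len(folders) - 1)]
    folder_path = os.path.join(path, random_folder)
    files = os.listdir(folder_path)
    random_file = files[randint(0, len(files) - 1)]
    return cv2.imread(os.path.join(folder_path, random_file))
```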

Finally we get the output.
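Putting it together, a small test loop over random validation images might look like this (the path and loop count are assumptions):

```python
import numpy as np

for _ in range(5):
    img = get_random_image('faces/validation')
    # Match the training preprocessing: RGB order, 224x224, scaled to [0, 1]
    rgb = cv2.cvtColor(cv2.resize(img, (224, 224)), cv2.COLOR_BGR2RGB)
    x = np.expand_dims(rgb.astype('float32') / 255.0, axis=0)
    pred = int(np.argmax(classifier.predict(x), axis=1)[0])
    draw_test('Prediction', pred, img)
    cv2.waitKey(0)

cv2.destroyAllWindows()
```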

OUTPUT