Identifying Blood Relatives with the Help of Deep Learning

Blood relatives often share facial features. Now researchers at Northeastern University want to improve their algorithm for facial image classification to bridge the gap between research and other familial markers like DNA results.

Here I want to share how I used deep learning algorithms for the Northeastern SMILE Lab challenge on Kaggle.

The challenge is to build a deep learning technique that helps researchers build a more complex model by determining whether two people are blood-related based solely on images of their faces. Most blood relatives share features such as the eyes, nose, and face shape, which we need to identify from the images.

Things to remember :

  1. There can be familial facial relationships that we might overlook; deep learning can help capture these.
  2. Remember, not every individual in a family shares a kinship relationship. For example, a mother and father are kin to their children, but not to each other.

Data Overview :

The data is provided by Families In the Wild (FIW), the largest and most comprehensive image database for automatic kinship recognition. FIW obtained data from publicly available images of celebrities.

In the competition we are provided with 4 files.

  1. train-faces.zip → The training set is divided into families, each with a unique family id (F0123) as the folder name. Each family folder contains member folders, each with a unique member id (MIDx), holding the individual face images of that member.
  2. train.csv → This file contains the training labels, i.e., the known kinship relations.
  3. test-faces.zip → The test set contains face images of unknown individuals.
  4. sample_submission.csv → A sample submission file in the correct format. The column img_pair describes the pair of images, i.e., abcdef-ghijkl means the pair of images abcdef.jpg and ghijkl.jpg. Your goal is to predict whether each pair of images in test-faces is related or not, where 1 means related and 0 means unrelated.

Metric :

Submissions are evaluated on the area under the ROC curve (AUC) between the predicted probability and the observed target, so we submit a probability for each image pair in sample_submission.csv.
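For reference, the same metric can be computed locally with scikit-learn; a minimal sketch with made-up labels and probabilities:

```python
# Minimal sketch: evaluating predictions locally with ROC AUC using scikit-learn.
# The labels and probabilities below are hypothetical, for illustration only.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0]            # 1 = related, 0 = unrelated
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1]  # predicted probability of kinship

print("ROC AUC:", roc_auc_score(y_true, y_prob))
```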

Exploratory Data Analysis :

a) Understanding the family member count:
We can see that only one family has 41 members, while the remaining families have fewer than 15 members.

Plot of the number of members in each family.
The minimum family size is 1 member, and only 1 family has this minimum size.
The maximum family size is 41 members, and only 1 family has this maximum size.
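As a rough sketch, these member counts can be computed directly from the extracted train-faces folder structure (the local path below is an assumption):

```python
# Sketch: count members per family from the extracted train-faces.zip.
# TRAIN_DIR is a hypothetical local path.
import os
from collections import Counter

TRAIN_DIR = "train"

# Each family folder (e.g. F0123) contains one sub-folder per member (MID1, MID2, ...).
member_counts = {
    fam: len(os.listdir(os.path.join(TRAIN_DIR, fam)))
    for fam in sorted(os.listdir(TRAIN_DIR))
}

size_distribution = Counter(member_counts.values())
print("Minimum members in a family:", min(member_counts.values()))
print("Maximum members in a family:", max(member_counts.values()))
print("Families per size:", dict(size_distribution))
```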

b) Plotting the images of a person: as we can see, the images are taken at different ages and in different poses.

c) Plotting the number of images in each member folder :

Important points used for optimization :

  1. The train/cv/test split should be done on the basis of family ids, not on relationship pairs, because splitting on relationship pairs can create data leakage: features of a family would be seen in the train data and again in the validation or test data.
  2. The given data contains only the existing relationships, so we also need to create a list of image pairs that have no relationship in order to train the model; this is another reason to split on family ids. Any pair of people that does not appear in the training relationships is assumed to be unrelated.
  3. For training we used fit_generator, because the images are 224 x 224 and loading all relationship pairs into an array consumes a lot of RAM. So we created a generator that yields pairs of images, with steps_per_epoch set to 200, meaning the generator yields 200 batches of image pairs per epoch (see the sketch after this list).
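Below is a simplified sketch of these three ideas, assuming the folder layout described above and a relationships file with columns p1 and p2; it is illustrative, not the exact competition code:

```python
# Sketch: family-based split, unrelated-pair sampling, and a batch generator.
# Paths and column names (TRAIN_DIR, train.csv, p1, p2) are assumptions.
import os
import random
from collections import defaultdict

import numpy as np
import pandas as pd
from keras.preprocessing import image

TRAIN_DIR = "train"              # extracted train-faces.zip
rels = pd.read_csv("train.csv")  # assumed columns: p1, p2 (e.g. "F0002/MID1")

# Map each person (family/member) to the list of their image paths.
person_images = defaultdict(list)
for fam in os.listdir(TRAIN_DIR):
    for mid in os.listdir(os.path.join(TRAIN_DIR, fam)):
        folder = os.path.join(TRAIN_DIR, fam, mid)
        person_images[f"{fam}/{mid}"] = [
            os.path.join(folder, f) for f in os.listdir(folder)
        ]

# 1) Split on family ids so no family leaks between train and validation.
families = sorted({p.split("/")[0] for p in rels.p1})
val_families = set(families[: int(0.1 * len(families))])
pairs = [
    (a, b) for a, b in zip(rels.p1, rels.p2)
    if person_images[a] and person_images[b]
]
train_pairs = [(a, b) for a, b in pairs if a.split("/")[0] not in val_families]
val_pairs = [(a, b) for a, b in pairs if a.split("/")[0] in val_families]

def read_img(path):
    """Load a face crop and resize it to the 224 x 224 input size."""
    return image.img_to_array(image.load_img(path, target_size=(224, 224)))

# 2) + 3) Generator: half of each batch are known kin pairs (label 1); the
# other half are randomly sampled pairs assumed to be unrelated (label 0).
def pair_generator(pair_list, batch_size=16):
    related = set(pair_list)
    people = sorted({p for pair in pair_list for p in pair})
    while True:
        pos = random.sample(pair_list, batch_size // 2)
        neg = []
        while len(neg) < batch_size // 2:
            a, b = random.sample(people, 2)
            if (a, b) not in related and (b, a) not in related:
                neg.append((a, b))
        batch = pos + neg
        labels = np.array([1] * len(pos) + [0] * len(neg))
        X1 = np.array([read_img(random.choice(person_images[a])) for a, _ in batch])
        X2 = np.array([read_img(random.choice(person_images[b])) for _, b in batch])
        yield [X1, X2], labels
```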

Data Preprocessing :

We have subtracted the mean R, G, B values of all training images from each image's R, G, B values, for both training and testing.

I have used preprocess_input in this code, which has hard-coded mean values for R, G, B that were computed on a very large dataset.
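A minimal sketch of this step, assuming preprocess_input comes from the keras_vggface utilities (the image file name is hypothetical):

```python
# Sketch: mean subtraction via keras_vggface's preprocess_input.
import numpy as np
from keras.preprocessing import image
from keras_vggface.utils import preprocess_input

img = image.load_img("some_face.jpg", target_size=(224, 224))  # hypothetical file
x = image.img_to_array(img)        # shape (224, 224, 3)
x = np.expand_dims(x, axis=0)      # add the batch dimension
x = preprocess_input(x, version=1) # version=1 uses the original VGGFace channel means
```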

Below is a sample of how the images look after preprocessing.

Deep learning models to solve the problem :

  1. VGG Face models : This architecture is described by Omkar Parkhi et al. in the 2015 paper titled Deep Face Recognition. An implementation of VGG Face is explained in this site.

a) Variation 1 :

b) Variation 2 :

c) Variation 3 :

VGG Face was the best model, achieving a higher AUC than the other models, and in this competition we used several variations of VGG Face models.
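As an illustration of what such a variation can look like (not the author's exact gists), here is a sketch of a pair model built on a frozen VGGFace backbone, assuming the keras_vggface package:

```python
# Sketch: a kinship pair model on top of a pretrained VGGFace (ResNet50) backbone.
# The head layers and hyperparameters below are assumptions for illustration.
from keras.layers import Input, Dense, Dropout, Concatenate, Multiply, Subtract
from keras.models import Model
from keras.optimizers import Adam
from keras_vggface.vggface import VGGFace

def build_pair_model():
    base = VGGFace(model="resnet50", include_top=False,
                   input_shape=(224, 224, 3), pooling="avg")
    for layer in base.layers[:-3]:   # freeze most of the pretrained backbone
        layer.trainable = False

    in1, in2 = Input((224, 224, 3)), Input((224, 224, 3))
    emb1, emb2 = base(in1), base(in2)

    # Combine the two embeddings via element-wise product and difference,
    # a common head for verification-style pair models.
    merged = Concatenate()([Multiply()([emb1, emb2]),
                            Subtract()([emb1, emb2])])
    x = Dense(128, activation="relu")(merged)
    x = Dropout(0.2)(x)
    out = Dense(1, activation="sigmoid")(x)   # probability that the pair is kin

    model = Model([in1, in2], out)
    model.compile(loss="binary_crossentropy", optimizer=Adam(1e-5), metrics=["acc"])
    return model
```

A model like this can then be trained with fit_generator on the pair generator sketched earlier, using steps_per_epoch=200.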

2. Siamese based model : This is a one-shot image recognition technique, described in the paper titled Siamese Neural Networks for One-shot Image Recognition. An implementation of a Siamese network is mentioned here.

In this competition we cannot use the Siamese model directly, so the model architecture takes inspiration from the Siamese model.

a) Variation 1 :

b) Variation 2 : Another model for which we changed steps_per_epoch to 100 in fit_generator.
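For illustration only, here is a rough Siamese-inspired sketch with a shared convolutional tower and an L1-distance head, as in the one-shot paper; the exact layers of the author's variations may differ:

```python
# Sketch: a Siamese-style model with a shared tower and L1-distance head.
# The tower depth and layer sizes are assumptions for illustration.
import keras.backend as K
from keras.layers import (Input, Conv2D, MaxPooling2D, GlobalMaxPooling2D,
                          Dense, Lambda)
from keras.models import Model, Sequential

def build_siamese():
    tower = Sequential([
        Conv2D(64, 3, activation="relu", input_shape=(224, 224, 3)),
        MaxPooling2D(),
        Conv2D(128, 3, activation="relu"),
        MaxPooling2D(),
        Conv2D(128, 3, activation="relu"),
        GlobalMaxPooling2D(),
        Dense(256, activation="relu"),
    ])
    in1, in2 = Input((224, 224, 3)), Input((224, 224, 3))
    e1, e2 = tower(in1), tower(in2)
    # L1 distance between the two embeddings, as in the one-shot paper.
    l1 = Lambda(lambda t: K.abs(t[0] - t[1]))([e1, e2])
    out = Dense(1, activation="sigmoid")(l1)
    model = Model([in1, in2], out)
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["acc"])
    return model
```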

3. Inception based model : It is one of the best face recognition techniques, described in the 2014 paper titled Going Deeper with Convolutions. An implementation of the Inception network is explained in this site.

Motivation behind the Inception network :

  • The bigger the model, the more prone it is to overfitting. This is particularly noticeable when the training data is small.
  • Increasing the number of parameters means you need to increase your existing computational resources.

We got a much lower public and private score with the Inception network, so we have not included it in the ensemble models.

Model scores :

Ensemble Models for Kaggle Submission :

We should not add models that have a low score. I kept a threshold of 0.81 and took only the models that meet this criterion.

PDF plot of the top 5 models' predictions.

The average of the outputs of the 5 models is taken and submitted, which gave a private score of 0.911.
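A minimal sketch of this averaging step (the submission file names are hypothetical; the is_related column name follows the sample submission format):

```python
# Sketch: average the predicted probabilities of the top 5 submissions.
# File names below are hypothetical placeholders.
import pandas as pd

files = ["sub_vggface_v1.csv", "sub_vggface_v2.csv", "sub_vggface_v3.csv",
         "sub_siamese_v1.csv", "sub_siamese_v2.csv"]
subs = [pd.read_csv(f) for f in files]

ensemble = subs[0][["img_pair"]].copy()
ensemble["is_related"] = sum(s["is_related"] for s in subs) / len(subs)
ensemble.to_csv("ensemble_submission.csv", index=False)
```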

The full link to the code can be found here.

Future Work:

  1. We can add a few more distinct models and take their ensemble output, provided they meet the above score criterion.
  2. We can ensemble the models by giving different weights to each model.

Sources :

  1. https://www.kaggle.com/c/recognizing-faces-in-the-wild/overview
  2. https://www.analyticsvidhya.com/blog/2018/10/understanding-inception-network-from-scratch/
  3. https://machinelearningmastery.com/how-to-perform-face-recognition-with-vggface2-convolutional-neural-network-in-keras/