Lunar Rock Classification


Image source: https://www.kaggle.com/romainpessia/understanding-and-using-the-dataset-wip


My warm welcome to all the readers!

Lunar Rock Classification is a competition posted on HackerEarth, an online platform for hackathons, coding challenges and competitions.

Content source: https://www.hackerearth.com/challenges/competitive/lunar-rock-hackerearth-data-science-competition/machine-learning/lunar-rock-recognition-43274e07-04533c43/

Objective:

The objective of this competition (Lunar Rock Classification) is to classify whether a lunar rock belongs to the class Small or Large.

FYI:

Soon I will upload the complete code (IPython notebook) to GitHub and update the link here.

You can skip the theory part and directly jump to ‘Step-by-step procedure for Lunar Rock classification’.

A short introduction to the topics below:

  1. OpenCV
  2. Deep Learning
  3. Convolutional Neural Network

We will use OpenCV to handle the images and deep learning to build a Convolutional Neural Network model, which is then used to predict whether a lunar rock belongs to the class Small or Large.

1. OpenCV:

OpenCV (Open Source Computer Vision) is a library for computer vision, i.e. for working with both grayscale and color images, as well as videos. It is used for object detection, object recognition, object tracking, video analysis, etc. Since we are working with images, let us talk a little bit about images. A computer cannot see images the way we do, so images are fed to it as arrays of pixel values ranging from 0 to 255. Pixel values can be normalized by dividing each value by 255, which brings them into the range 0 to 1.

A grayscale image is a two-dimensional image (x, y) where 'x' represents pixel width and 'y' represents pixel height. It is a 'black and white' image containing shades between black and white, stored in a single channel. If the intensity is 0, the pixel is black, and if the intensity is 255 (not normalized), the pixel is white.

A color image is a three-dimensional image (x, y, 3) where 'x' represents pixel width, 'y' represents pixel height, and '3' represents the three color channels RGB (Red-Green-Blue), also known as additive colors, which combine in different proportions to produce the other colors.
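To make this concrete, here is a minimal sketch (not the competition code; the file name is a placeholder) of loading an image with OpenCV, checking its shape, and normalizing the pixel values:

    # Minimal sketch: load an image with OpenCV, inspect its shape, normalize pixels.
    # 'sample_rock.png' is a placeholder file name.
    import cv2
    import numpy as np

    color_img = cv2.imread("sample_rock.png", cv2.IMREAD_COLOR)  # shape: (height, width, 3), BGR channel order
    gray_img = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)       # shape: (height, width), single channel
    print(color_img.shape, gray_img.shape)

    # Normalize pixel values from the range [0, 255] to [0, 1]
    normalized = color_img.astype(np.float32) / 255.0
    print(normalized.min(), normalized.max())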

2. Deep Learning:

Deep Learning is a sub-field of Machine Learning. The idea of deep learning is inspired by the neural connections of the human brain; the internal structure of a deep learning algorithm resembles those connections, which is why it is called a 'Neural Network'. For Lunar Rock Classification, we will use a Convolutional Neural Network (CNN), which works very well with images.

3. Convolutional Neural Network (CNN):

Internally, a CNN has a structure like a neural network. In a CNN, many inputs, the data points (x1, x2, x3, ..., xn), are given to each neuron along with weights (w1, w2, w3, ..., wn) and a bias (b1). Each neuron performs the dot product of the data points and the weights and adds the bias. This sum is then passed through an activation function such as 'relu', 'softmax' or 'sigmoid', which gives the required output.
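As a tiny illustration of this computation (all numbers below are made up), a single neuron with a ReLU or sigmoid activation can be written as:

    # Toy example of one neuron: dot product of inputs and weights, plus bias,
    # passed through an activation function. All values are made up.
    import numpy as np

    x = np.array([0.2, 0.5, 0.1])    # data points x1, x2, x3
    w = np.array([0.4, -0.3, 0.8])   # weights w1, w2, w3
    b = 0.1                          # bias b1

    z = np.dot(x, w) + b                 # weighted sum
    relu_out = max(0.0, z)               # 'relu' activation
    sigmoid_out = 1 / (1 + np.exp(-z))   # 'sigmoid' activation
    print(z, relu_out, sigmoid_out)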

*** Step-by-step procedure for ‘Lunar Rock’ classification:

*** Dataset:

The dataset can be downloaded from the competition page on HackerEarth. It comes as a zip archive containing the train and test image folders, the train.csv and test.csv files, and a sample submission file. The train folder consists of 2 sub-folders, Large and Small, which contain the already classified images. The test folder contains the images we need to predict and classify as either class Large or Small.

*** Exploring Images:

Image ids from '..Train Images/Large', '..Train Images/Small' and '..Test Images' are extracted and stored in a list. We will use this list of image ids to import images for further analysis, model preparation and prediction.

Sample code:

Code to store image ids in a list
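Since the code screenshot is not reproduced here, below is a hedged sketch of how the image ids could be collected; the directory paths are assumptions based on the dataset layout described above:

    # Possible way to collect image ids from the dataset folders.
    # The directory paths are assumptions, not the exact competition paths.
    import os

    large_dir = "Train Images/Large"
    small_dir = "Train Images/Small"
    test_dir = "Test Images"

    large_ids = os.listdir(large_dir)   # file names of class Large images
    small_ids = os.listdir(small_dir)   # file names of class Small images
    test_ids = os.listdir(test_dir)     # file names of images to predict
    print(len(large_ids), len(small_ids), len(test_ids))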

*** Sample Train Data Images:

From the images below, we can see 4 colors: blue, green, black and red. Blue represents the large rocks, green represents the small rocks, red represents the sky, and the rest is black.

Lunar Rock Images

*** Train Dataframe:


The head of the dataframe, loaded from the train.csv file, contains 2 features: Image_File and Class. The Image_File feature contains the image ids from train/Large and train/Small. The Class feature contains 2 unique values, Large and Small, which represent the class of each image.
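A minimal sketch of loading and inspecting train.csv with pandas (the file path is an assumption):

    # Load the train.csv file and inspect it; the path is an assumption.
    import pandas as pd

    train_df = pd.read_csv("train.csv")
    print(train_df.head())              # columns: Image_File, Class
    print(train_df["Class"].unique())   # expected: ['Large', 'Small']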

*** Class Count

Value counts of Class

From the output, we can analyse the count of each class (Large/Small) both numerically and graphically.

Numerically, 5999 images belong to class Large and 5999 images belong to class Small. The data is perfectly balanced.

Graphically too, we can see that the counts of both classes are equal. The blue bar represents class Large and the orange bar represents class Small. The heights of both bars are the same, which again shows that the two classes are perfectly balanced.
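A sketch of how the class counts can be checked numerically and graphically, assuming the train_df dataframe from the previous step:

    # Count and plot the class distribution.
    import matplotlib.pyplot as plt

    counts = train_df["Class"].value_counts()
    print(counts)                # numeric counts per class

    counts.plot(kind="bar")      # bar chart of the class balance
    plt.xlabel("Class")
    plt.ylabel("Count")
    plt.show()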

*** Image Data Generator (ImageDataGenerator):

Sample code for ImageDataGenerator

Keras has an ImageDataGenerator class which generates customized images out of existing images. Suppose we have an image and want to generate new images from it; we can do this by rotating the image by a certain degree, shifting the width and height by a certain value, zooming in and out by a certain value, flipping the image horizontally/vertically, etc. By setting these parameters, newly customized images are generated and fed to the model. This helps the model learn the images from various edges and angles. We will see how an image is generated from an existing image in the figure below.
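Since the original code screenshot is not shown here, below is a sketch of an ImageDataGenerator configuration; the exact parameter values used in the competition code may differ:

    # Sketch of an ImageDataGenerator; parameter values are illustrative.
    from keras.preprocessing.image import ImageDataGenerator

    train_datagen = ImageDataGenerator(
        rescale=1.0 / 255,        # normalize pixel values to [0, 1]
        rotation_range=20,        # rotate images by up to 20 degrees
        width_shift_range=0.1,    # shift width by up to 10%
        height_shift_range=0.1,   # shift height by up to 10%
        zoom_range=0.1,           # zoom in/out by up to 10%
        horizontal_flip=True,     # randomly flip images horizontally
    )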

Original image (left) and generated image (right)

Let us compare the original image (left) and the generated image (right). The original image has 2 small rocks (green) with the slope inclined towards the left. The generated image has the same 2 small rocks (green) with the slope inclined towards the right. This shows that the image was generated with the defined parameters.

*** Flow From Directory (flow_from_directory):

ImageDataGenerator has a method called 'flow_from_directory'. This method is used to extract images from sub-folders. Note that, when using this method, the directory should contain at least one sub-folder from which images can be extracted and fed into the model. For instance, suppose we have a directory 'train' with 2 sub-folders, 'cat' and 'dog'. We pass the 'train' directory to 'flow_from_directory', and it then extracts the images separately from the sub-folders 'cat' and 'dog' (in a classified manner).

‘flow_from_directory’

Here, flow_from_directory points to the …Train Images directory, which has 2 sub-folders, 'Large' and 'Small'. From the output, we can see n images belonging to 2 classes (Large and Small); a sketch of this call follows the parameter list below.

  • target_size: Size of the input image. In the above case, size is (480, 480).
  • batch_size: Size of the batches of data.
  • class_mode: It is set to ‘binary’ as we have binary label.
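The sketch below assumes the train_datagen generator from the previous section; the directory path and batch size are assumptions:

    # Read batches of labeled images from the 'Large' and 'Small' sub-folders.
    train_img_gen = train_datagen.flow_from_directory(
        "Train Images",          # directory containing the 'Large' and 'Small' sub-folders
        target_size=(480, 480),  # resize every image to 480 x 480
        batch_size=32,           # assumed batch size
        class_mode="binary",     # two classes, so binary labels
    )

    print(train_img_gen.class_indices)  # expected: {'Large': 0, 'Small': 1}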

*** Class Indices (class_indices):

class_indices

It outputs the indices of the labels: the index for label Large is 0 and for label Small it is 1.

FYI: For documentation on ImageDataGenerator, flow_from_directory and class_indices, please go through Keras Documentation.

*** Building Conv2d Model:

We build a Conv2D neural network with 3 feature-learning layers and 2 fully connected layers. The parameters I have used are listed below, followed by a sketch of one possible architecture:

  • kernel_size — (3,3)
  • Activation — ‘relu’
  • Activation at output — ‘sigmoid’
  • kernel_regularizer — regularizers.l2(0.01)
  • max pool_size — (2,2)
  • Dropout — 0.3
  • loss — ‘binary_crossentropy’
  • Optimizer — ‘adam’

*** Conv2d Network Architecture:

Conv2d Network Architecture

*** Fit Generator (fit_generator):

Fitting model with fit_generator

We will now train the model by fitting the training data using 'fit_generator'; a sketch of the call follows the parameter list below.

  • train_img_gen: ‘n’ number of images called from directory using ‘flow_from_directory’.
  • epochs: 20 (Number of times model runs).
  • steps_per_epoch: 100 (Number of steps trained at every epoch).
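
The sketch assumes the train_img_gen generator and the model defined above:

    # Train the model on batches produced by the generator.
    history = model.fit_generator(
        train_img_gen,
        steps_per_epoch=100,  # number of generator batches per epoch
        epochs=20,            # number of passes over the generated data
    )
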
Model training

From the training log, we can see how the model was trained: 100 steps per epoch for 20 epochs. Loss and accuracy fluctuate across epochs. At the 15th epoch, the loss is the lowest (0.1018) and the accuracy is the highest (0.9919, i.e. 99.19%).

*** Predicting Test Data Images:

Now it's time to predict whether each test data image belongs to class 'Large' or class 'Small'.

*** Sample Test Data Images:

Below are the sample test data images.

Test Data Images

*** Test Dataframe:

Let us have a look at the head of the test dataframe. It consists of 2 features (columns): Image_File and Class. The Image_File feature consists of image ids, while Class contains only null values (NaN). We need to predict the class of each test image and fill the Class feature with the predicted class.

*** Preparing Test Data Images:

The test data images are resized, converted to NumPy arrays, and stored in a list. We will use this list of arrays (which contains the image pixel values) to predict the class of the images.

*** Sample code:

Sample code to resize test data images
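A sketch of preparing the test images, assuming the test_ids list and test_dir path from the earlier sketch and the same 480 x 480 input size:

    # Resize each test image, convert it to a NumPy array, and collect it in a list.
    import os
    import cv2
    import numpy as np

    test_arrays = []
    for image_id in test_ids:                                # ids collected earlier
        img = cv2.imread(os.path.join(test_dir, image_id))   # load the image
        img = cv2.resize(img, (480, 480))                    # match the model input size
        test_arrays.append(img.astype(np.float32) / 255.0)   # normalize pixel values

    test_arrays = np.array(test_arrays)                      # shape: (n_images, 480, 480, 3)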

*** Test Image Prediction:

We will now predict the test images using 'model.predict(list_of_arrays)', where 'model' is the trained model and 'list_of_arrays' contains the image pixel values. The prediction output is converted to a binary class (1 or 0). As we know the class indices from 'class_indices' (0 for Large and 1 for Small), we replace class 1 with 'Small' and class 0 with 'Large'.
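A sketch of the prediction and label mapping, assuming the trained model, the test_arrays prepared above, and a test_df dataframe loaded from test.csv:

    # Predict the test images and map the binary outputs back to class labels.
    predictions = model.predict(test_arrays)                    # sigmoid outputs in [0, 1]
    predicted_classes = (predictions > 0.5).astype(int).ravel()

    # class_indices gave us: Large -> 0, Small -> 1
    label_map = {0: "Large", 1: "Small"}
    test_df["Class"] = [label_map[p] for p in predicted_classes]
    test_df.to_csv("submission.csv", index=False)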

This is how the final dataframe with the predicted class looks:

Predicted class

*** Test Data Predicted Score (100 * f1_score):

I submitted the predicted classes for the test images in the competition's submission field and got a score (100 * f1_score) of 99.76109.

Test data predicted score

*** Thank you all for reading this blog. Your suggestions are very much appreciated! ***