Convolutional Neural Networks with TensorFlow


The Implementation

Prepare the data

Before we implement the model, we have to prepare the data. First, we import it. The data comes from the Digit Recognizer competition on Kaggle and is split into 42000 labeled observations for training and a separate set of unlabeled rows that serves as the test data for the model. After we import the data, it looks like this at first,

The dataset is still in tabular format. The columns represent the label and the 784 pixels (28 x 28) that make up each image. Because of that, we have to reshape the dataset into image form. Before doing that, make sure you normalize the pixel values first, then reshape. Finally, we split the training data into two parts: a training set and a validation set for the model.

The code for doing that looks like this,

# Import The Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split
# Import The Dataset
train = pd.read_csv('../input/digit-recognizer/train.csv')
test = pd.read_csv('../input/digit-recognizer/test.csv')
# Prepare the training set
train_image = train.drop('label', axis=1)
train_label = train['label']
# Normalize the data
train_image = train_image / 255.0
test_image = test / 255.0
# Reshaping the image
train_image = train_image.values.reshape(-1, 28, 28, 1)
test_image = test_image.values.reshape(-1, 28, 28, 1)
# Split into training and validation dataset
X_train, X_val, y_train, y_val = train_test_split(train_image, train_label, test_size=0.1, random_state=42)
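
As a quick sanity check, you can print the shapes produced by the preparation step; with a 10% validation split of 42000 rows, you would expect 37800 training images and 4200 validation images,

# Sanity check: confirm the arrays have the expected shapes
print(X_train.shape, y_train.shape)  # expected: (37800, 28, 28, 1) (37800,)
print(X_val.shape, y_val.shape)      # expected: (4200, 28, 28, 1) (4200,)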

For a preview of the images, the data will look like this,

The Preview of The Image
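
If you want to reproduce that preview yourself, a minimal sketch using matplotlib (relying on the X_train and y_train variables from the preparation code above) could look like this,

# Display the first few training images with their labels
fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for i, ax in enumerate(axes):
    ax.imshow(X_train[i].reshape(28, 28), cmap='gray')
    ax.set_title(int(y_train.iloc[i]))
    ax.axis('off')
plt.show()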

Build the model

After we prepare the data, we can build the model. We build it based on the architecture described earlier, adjusted to our dataset because the images have different dimensions from the original paper, namely 28 x 28 x 1. As we stated before, the flow of the model will look like this,

Conv => Pool => Conv => Pool => Fully-Connected

Here is the code to build that,

model = models.Sequential()
# Feature Extraction Section (The Convolution and The Pooling Layer)
model.add(layers.Conv2D(filters=6, kernel_size=(5, 5), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.AveragePooling2D())
model.add(layers.Conv2D(filters=16, kernel_size=(5, 5), activation='relu'))
model.add(layers.AveragePooling2D())
# Reshape the image into one-dimensional vector
model.add(layers.Flatten())
# Classification Section (The Fully Connected Layer)
model.add(layers.Dense(120, activation='relu'))
model.add(layers.Dense(84, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
# Show summary of the model
model.summary()

After we build the model, the summary will look like this,

The Summary of The Model

As we can see, the spatial dimensions of the image decrease while the number of filters increases. After we extract the features, we flatten them into a one-dimensional vector for the classification step. We can also see that each layer, except the pooling layers, has trainable parameters. In this case, there are 44426 parameters for the model to learn. The model adjusts those parameters to achieve greater accuracy when predicting the images.
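
If you want to verify that number, the per-layer counts from the summary can be reproduced by hand. The sketch below assumes the feature map shrinks from 28 x 28 to 24 x 24 after the first 5 x 5 convolution, to 12 x 12 after pooling, to 8 x 8 after the second convolution, and to 4 x 4 after the second pooling,

# Rough hand-check of the trainable parameter count
conv1 = (5 * 5 * 1) * 6 + 6        # 156
conv2 = (5 * 5 * 6) * 16 + 16      # 2416
dense1 = (4 * 4 * 16) * 120 + 120  # 30840 (flattened 4 x 4 x 16 feature map)
dense2 = 120 * 84 + 84             # 10164
dense3 = 84 * 10 + 10              # 850
print(conv1 + conv2 + dense1 + dense2 + dense3)  # 44426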

To fit and optimize the model, the code will look like this,

# Compile The Model
# The output layer already applies softmax, so from_logits must be False
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
# Fit And Evaluate The Model Using Validation Dataset
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_val, y_val))
# Evaluate The Model Using Plot
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.95, 1])
plt.legend(loc='lower right')
plt.show()

After that, the model fits the data, repeating the process for the given number of epochs. For the evaluation, the model achieves accuracy like this,

The Fitting Process
The line plot of accuracy per epoch

From the plot, we can see that around epoch 5 the model achieves slightly better validation accuracy than at the other epochs. Therefore, we can train the model with only 5 epochs and use it to predict the test images.
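
As a follow-up, here is a possible sketch of how to use the trained model to predict the Kaggle test images and write a submission file; it assumes the ImageId/Label format used by the Digit Recognizer competition,

# Predict the test images and build a submission file
predictions = model.predict(test_image)
predicted_labels = np.argmax(predictions, axis=1)
submission = pd.DataFrame({'ImageId': np.arange(1, len(predicted_labels) + 1),
                           'Label': predicted_labels})
submission.to_csv('submission.csv', index=False)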