Understanding And Implementing Dropout In TensorFlow And Keras

Implementing Dropout Technique

With TensorFlow and Keras, we have the tools to implement a neural network that uses the dropout technique simply by including Dropout layers within the network architecture.

We only need to add one line to include a dropout layer within a more extensive neural network architecture. The Dropout class takes a few arguments, but for now, we are only concerned with the ‘rate’ argument. The dropout rate is a hyperparameter that represents the probability of a neuron’s activation being set to zero during a training step. The rate argument takes values between 0 and 1.

keras.layers.Dropout(rate=0.2)
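
To see what the rate argument means in practice, here is a minimal sketch (assuming TensorFlow 2.x and NumPy) that applies a Dropout layer to a tensor of ones. During training, roughly 20% of the values are zeroed and the survivors are scaled up by 1/(1 − rate) so the expected sum is preserved; at inference, the layer passes values through unchanged.

import numpy as np
from tensorflow import keras

dropout = keras.layers.Dropout(rate=0.2)
x = np.ones((1, 10), dtype="float32")

# Training mode: ~20% of values become 0, the rest become 1 / 0.8 = 1.25.
print(dropout(x, training=True))

# Inference mode: the input is returned unchanged.
print(dropout(x, training=False))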

From this point onwards, we will go through the steps required to implement, train, and evaluate a neural network.

1. Load the libraries utilized: TensorFlow and Keras.

import tensorflow as tf
from tensorflow import keras

2. Load the Fashion-MNIST dataset, normalize the images, and partition the data into training, validation, and test sets.

(train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()

# Scale pixel values from [0, 255] to [0, 1].
train_images = train_images / 255.0
test_images = test_images / 255.0

# Hold out the first 5,000 training examples for validation
# and train on the remaining 55,000.
validation_images, train_images = train_images[:5000], train_images[5000:]
validation_labels, train_labels = train_labels[:5000], train_labels[5000:]
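
As a sanity check on the split (the expected shapes below follow from holding out 5,000 of the 60,000 training examples):

print(train_images.shape, train_labels.shape)            # (55000, 28, 28) (55000,)
print(validation_images.shape, validation_labels.shape)  # (5000, 28, 28) (5000,)
print(test_images.shape, test_labels.shape)              # (10000, 28, 28) (10000,)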

3. Create a custom model that includes a dropout layer using the Keras model subclassing API.

class CustomModel(keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.input_layer = keras.layers.Flatten(input_shape=(28, 28))
        self.hidden1 = keras.layers.Dense(200, activation='relu')
        self.hidden2 = keras.layers.Dense(100, activation='relu')
        self.hidden3 = keras.layers.Dense(60, activation='relu')
        self.output_layer = keras.layers.Dense(10, activation='softmax')
        # A single Dropout layer can be reused because it holds no weights.
        self.dropout_layer = keras.layers.Dropout(rate=0.2)

    def call(self, inputs, training=None):
        # Pass `training` through so dropout is active during fit()
        # but disabled during evaluate() and predict().
        x = self.input_layer(inputs)
        x = self.dropout_layer(x, training=training)
        x = self.hidden1(x)
        x = self.dropout_layer(x, training=training)
        x = self.hidden2(x)
        x = self.dropout_layer(x, training=training)
        x = self.hidden3(x)
        x = self.dropout_layer(x, training=training)
        return self.output_layer(x)
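
For comparison, here is a sketch of the same architecture built with the Sequential API; the sequential_model name below is our own, and the model is equivalent to CustomModel apart from the subclassing boilerplate, since Dropout layers hold no weights and can simply be stacked between the Dense layers.

sequential_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dropout(rate=0.2),
    keras.layers.Dense(200, activation='relu'),
    keras.layers.Dropout(rate=0.2),
    keras.layers.Dense(100, activation='relu'),
    keras.layers.Dropout(rate=0.2),
    keras.layers.Dense(60, activation='relu'),
    keras.layers.Dropout(rate=0.2),
    keras.layers.Dense(10, activation='softmax'),
])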

4. Instantiate the model, initialize the optimizer and its hyperparameters, and compile.

model = CustomModel()
# 'learning_rate' replaces the deprecated 'lr' argument.
sgd = keras.optimizers.SGD(learning_rate=0.01)
model.compile(loss="sparse_categorical_crossentropy", optimizer=sgd, metrics=["accuracy"])

5. Train the model for a total of 60 epochs.

model.fit(train_images, train_labels, epochs=60, validation_data=(validation_images, validation_labels))
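
fit() returns a History object. If you capture it, you can compare the training and validation curves to check that dropout is limiting overfitting; a short sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

history = model.fit(train_images, train_labels, epochs=60,
                    validation_data=(validation_images, validation_labels))

# With dropout working, the two accuracy curves should stay close together.
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()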

6. Evaluate the model on the test dataset.

model.evaluate(test_images, test_labels)

The result of the evaluation will look similar to the example below:

10000/10000 [==============================] - 0s 34us/sample - loss: 0.3230 - accuracy: 0.8812
[0.32301584649085996, 0.8812]

The second number in the returned list is the test accuracy: our model classifies roughly 88% of the test images correctly.
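
Since compile() was given one loss and one metric, evaluate() returns the pair [loss, accuracy], which can be unpacked directly:

# The list printed above is [test loss, test accuracy].
test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print(f"test accuracy: {test_accuracy:.2%}")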

With some fine-tuning and more training epochs, the accuracy could be increased by a few percentage points; one possible refinement is sketched below.
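
As a suggestion beyond the original steps (not part of the article's code): replace the fixed 60 epochs with early stopping, which halts training once validation accuracy stops improving and restores the best weights seen.

early_stopping = keras.callbacks.EarlyStopping(
    monitor='val_accuracy', patience=5, restore_best_weights=True)

model.fit(train_images, train_labels, epochs=100,
          validation_data=(validation_images, validation_labels),
          callbacks=[early_stopping])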

Here’s a GitHub repository for the code presented in this article.