LeNet-5: The 22-year-old neural network

As a computer vision fan, I have decided to start my blog with an overview of the deep learning architectures used in computer vision throughout history. I will introduce one architecture per post and filter out most of the unnecessary theory. For most posts, I will upload an accompanying notebook to GitHub along with a Google Colab link, so my readers can run their own experiments and play with the presented architecture; you can find the links in the resources section below.

The first architecture that I want to introduce is LeNet, which was first presented to the deep learning community in “Gradient-Based Learning Applied to Document Recognition” by LeCun et al. in 1998. The paper showed how trained neural networks could outperform hand-coded rules in tasks such as automatic speech and handwriting recognition.

The LeNet architecture is relatively simple and consists of only two blocks of convolutional and subsampling layers followed by fully connected layers. Figure 1 shows the original diagram of LeNet’s architecture. Figure 2 is the Keras implementation of LeNet.

Figure 1 — LeNet’s architecture. Source: Yann LeCun et al. “Gradient-Based Learning Applied to Document Recognition.” In: Proceedings of the IEEE (1998), pp. 2278–2324.

Figure 2 — LeNet implementation in Keras.
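For readers who cannot see the figure, here is a minimal sketch of what a LeNet-5-style Keras model might look like. It is not necessarily identical to the implementation in Figure 2; the layer sizes (6 and 16 filters, 120/84/10 units) follow the original paper, and the tanh activations and average pooling approximate the original design.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_lenet5(input_shape=(28, 28, 1), num_classes=10):
    """LeNet-5 sketch: two conv/subsampling blocks, then fully connected layers."""
    return keras.Sequential([
        layers.Input(shape=input_shape),
        # Block 1: convolution + subsampling (average pooling in the original)
        layers.Conv2D(6, kernel_size=5, padding="same", activation="tanh"),
        layers.AveragePooling2D(pool_size=2),
        # Block 2: convolution + subsampling
        layers.Conv2D(16, kernel_size=5, activation="tanh"),
        layers.AveragePooling2D(pool_size=2),
        # Fully connected classifier head
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_lenet5()
model.summary()
```

Modern reimplementations often swap the tanh activations for ReLU and the average pooling for max pooling, but the version above stays closer to the 1998 design.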

Yann LeCun et al. proposed a handwriting recognition system, so it is only fair to train and test LeNet on the MNIST handwritten digit dataset, despite how much we dread it. For future posts, I promise to keep the use of MNIST to a minimum. Figure 3 and Figure 4 show the training history of the model.

Figure 3 — Loss vs. Epochs.

Figure 4 — Accuracy vs. Epochs.

The number of epochs used was only three because the network starts to overfit after the third epoch. Despite the small number of epochs and the age of the architecture, LeNet achieved an impressive 99.14% accuracy on the training set and 98.86% accuracy on the validation set; not bad for a 22-year-old neural network!
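The full experiment lives in the notebook linked in the resources section; the training loop itself can be sketched in a few lines. Note that the optimizer, batch size, and exact model definition below are my assumptions for illustration, so the resulting numbers will differ slightly from Figures 3 and 4.

```python
from tensorflow import keras
from tensorflow.keras import layers

# LeNet-5-style model (tanh activations, average pooling, as sketched earlier)
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(6, 5, padding="same", activation="tanh"),
    layers.AveragePooling2D(2),
    layers.Conv2D(16, 5, activation="tanh"),
    layers.AveragePooling2D(2),
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),
    layers.Dense(84, activation="tanh"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Load MNIST, add a channel axis, and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# Three epochs only: the network starts to overfit beyond that
history = model.fit(x_train, y_train, epochs=3, batch_size=128,
                    validation_data=(x_test, y_test))
print("final val accuracy:", history.history["val_accuracy"][-1])
```

Plotting `history.history["loss"]` and `history.history["accuracy"]` against epochs reproduces curves like those in Figures 3 and 4.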

To conclude this post, I’d like to invite you to read LeNet’s paper. It is a lengthy and math-heavy paper, but it explains in-depth how researchers were able to create a highly accurate handwriting recognition tool more than 20 years ago.

Resources

Paper: http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf

GitHub: https://github.com/fescobar96/Computer-Vision-Architectures/blob/master/LeNet_5_on_MNIST.ipynb

References

  1. Yann LeCun et al. “Gradient-Based Learning Applied to Document Recognition.” In: Proceedings of the IEEE (1998), pp. 2278–2324.