Shedding Light on the Autoencoder [Part 1: Introduction]

Original article was published by Miloud Belarebia on Deep Learning on Medium


An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner, i.e., from unlabeled data [1]. In general, it is based on encoding and then decoding a message. In this article, we will cover:

  • Arabic phone game
  • Origin of the autoencoder
  • How it works
  • Different types
  • Applications
  • Conclusion

Arabic phone game

As time passes, we observe that the human eye and brain are remarkable tools, and their power is never better appreciated than when we try to imitate them. I will use the game “Arabic Phone” as our reference to explain the autoencoder algorithm in a simple way. The game proceeds as follows:

  • We have a classroom with a group of people.
  • We send 4 to 5 people (e.g., 5 students) out of the room.
  • The trainer reads a relatively detailed text (the rumor) to a person remaining in the classroom.
  • That person must transmit the rumor, from memory, to one of the people who left.
  • It is then that person's turn to bring in the next participant and pass the rumor on, still from memory.
  • And so on until the last person, who gives their version back to the whole group.
  • We then read the original text to see everything that has changed.

The final observation is that the initial message has been modified. The first messenger transmits the message; the next participant encodes the information, then tries to decode the message using the tools available to their brain, according to their background, and then transmits it to the next person. Many factors can influence the way each participant transmits the information.

The game can be played in different ways. For example, we can change the medium, so the initial message is conveyed through spoken words, movements, or paintings.

Origin of the autoencoder

There is a link between this game and the autoencoder algorithm. The autoencoder started to gain attention in the 1980s; the concept is based on dimensionality reduction, which means selecting the relevant and representative variables from a large set of variables in unlabeled data [2].

How it works

An Autoencoder consists of two parts, the encoder and the decoder:

  • Encoder: encodes the input. First, the data is normalized so that it falls in the range [0, 1]. The encoder then compresses it into a code; in the simplest formulation, the resulting values can even be thresholded into boolean form (0 or 1). This code represents the translated message received by the receiver, as explained previously in the game.
  • Decoder: dissects the encoded input to reconstruct the original as closely as possible. Once again, as in the game described at the beginning, the second person, in order to transmit the message to the next, tries to decode what was registered by their brain.
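As a rough sketch, the two parts above can be illustrated with plain NumPy. The weights here are random and untrained, and all names and dimensions are hypothetical, chosen only to show the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy input: four 8-dimensional samples, already normalized to [0, 1].
x = rng.random((4, 8))

# Hypothetical, untrained weights just to show the shapes involved.
W_enc = rng.standard_normal((8, 3))   # encoder: 8 -> 3 (the "bottleneck")
W_dec = rng.standard_normal((3, 8))   # decoder: 3 -> 8

# Encoder: compress each input into a 3-dimensional code.
code = sigmoid(x @ W_enc)

# Optional binarization, as described above: threshold the code to 0/1.
binary_code = (code > 0.5).astype(int)

# Decoder: try to rebuild the original 8-dimensional input from the code.
reconstruction = sigmoid(code @ W_dec)

print(code.shape)            # (4, 3)
print(reconstruction.shape)  # (4, 8)
```

In a real autoencoder these weights would be learned by training, precisely so that the reconstruction matches the input as closely as possible.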

Between these two parts of the autoencoder, there is a quantity that we seek to minimize:

  • Reconstruction error: the difference between the original input and the decoder's output. To illustrate this concept with our game: when the second person tries to decode the message in order to transmit it, the reconstruction error is the modification between the original message and the transmitted message.
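A common choice for measuring the reconstruction error is the mean squared error between the input and the decoder's output. The numbers below are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical original message (input) and what came out of the decoder.
original = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
reconstructed = np.array([0.1, 0.9, 0.8, 0.2, 1.0])

# Mean squared error: the average of the squared element-wise differences.
reconstruction_error = np.mean((original - reconstructed) ** 2)
print(reconstruction_error)  # 0.02
```

Training the autoencoder means adjusting the encoder and decoder weights so that this value becomes as small as possible over the whole dataset.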

Different types

Since its appearance, the architecture of the autoencoder has evolved, and today there are many types, including [3]:

  • Vanilla autoencoder
  • Regularized Autoencoders: Various techniques exist to prevent autoencoders from learning the identity function and to improve their ability to capture important information and learn richer representations. These types are:

      — Sparse autoencoder (SAE)

      — Denoising autoencoder (DAE)

      — Contractive autoencoder (CAE)

  • Convolutional Autoencoders
  • Generating sentences using LSTM autoencoders.
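To make the denoising variant more concrete: a denoising autoencoder is trained to map a corrupted copy of the input back to the clean original. A minimal sketch of two common corruption schemes, using hypothetical NumPy data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Clean inputs in [0, 1], e.g. flattened image pixels (hypothetical data).
clean = rng.random((4, 8))

# 1) Additive Gaussian noise, clipped back into [0, 1].
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)

# 2) Masking noise: randomly zero out a fraction of the entries.
mask = rng.random(clean.shape) > 0.3   # keep roughly 70% of the values
masked = clean * mask

# Training pairs would then be (noisy, clean) or (masked, clean):
# the network sees the corrupted version but is scored against the original.
print(noisy.shape, masked.shape)
```

The key point is that the reconstruction target is always the clean input, which forces the network to learn robust features rather than simply copying its input.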

Applications

Some of the applications of the autoencoder are:

  • Dimensionality Reduction.
  • Anomaly Detection: detecting elements that deviate from what is normal in our data.
  • Image Processing: autoencoders are used for image reconstruction.
  • Machine Translation: using the power of the encoder/decoder idea to translate text from one language to another.
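For anomaly detection in particular, a common recipe is to train the autoencoder on normal data only and flag samples whose reconstruction error is unusually high. A sketch with hypothetical, made-up error values and an arbitrary threshold:

```python
import numpy as np

# Hypothetical per-sample reconstruction errors from a trained autoencoder.
# Normal samples reconstruct well (low error); anomalies do not.
errors = np.array([0.01, 0.02, 0.015, 0.30, 0.012, 0.25])

# Flag as anomalous anything whose error exceeds a chosen threshold.
threshold = 0.1
is_anomaly = errors > threshold
print(is_anomaly.tolist())  # [False, False, False, True, False, True]
```

In practice the threshold is chosen from the distribution of errors on held-out normal data, e.g. as a high percentile.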

Among the deep learning libraries that implement autoencoders in Python, we can cite:

  • Keras
  • TensorFlow
  • PyTorch
  • Lasagne

For other deep learning libraries that support autoencoders, see reference [4].


Conclusion

This was a theoretical and basic introduction to how the autoencoder algorithm works and what its applications are. There will be three more parts:

1. Explaining the mathematical concepts behind the algorithm.

2. Detailing the different autoencoder architectures.

3. Applying the different types of autoencoder in Python.

Below you can find the references used. I hope this was easy to understand for all of you; I am at your disposal to further discuss and answer your questions in the comment section.

Thanks for reading the article! If you like my article do 👏.

[1] Autoencoder, Wikipedia.

[2] Deep Learning in Neural Networks: An Overview

[3] Deep Learning with TensorFlow 2 and Keras: Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API, 2nd Edition.

[4] Top 13 python deep learning libraries