Source: Deep Learning on Medium

By Neel Sharma


Recurrent Neural Networks (RNNs) are a powerful class of neural networks and among the most promising algorithms in use today, because they are the only ones with an internal memory.

Because of this internal memory, RNNs can remember important things about the input they have received, which makes them very good at predicting what comes next.

This is why they are the preferred algorithm for sequential data such as time series, speech, text, financial data, audio, video and weather: they can form a much deeper understanding of a sequence and its context than other algorithms can.
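To make the idea of internal memory concrete, here is a minimal sketch of a single vanilla RNN step in plain numpy. The function name and weight names are illustrative, not from any particular library; the point is that the hidden state `h` carries a summary of everything the network has seen so far.

```python
import numpy as np

def rnn_step(x, h_prev, Wx, Wh, b):
    """One step of a vanilla RNN (illustrative sketch).
    The hidden state h is the network's internal memory: it mixes
    the new input x with a summary of all previous inputs, h_prev."""
    return np.tanh(Wx @ x + Wh @ h_prev + b)
```

Running the same function over a sequence, feeding each output back in as `h_prev`, is all the recurrence there is; LSTMs refine how that state is updated.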

Long Short-Term Memory (LSTM) networks are an extension of recurrent neural networks that, in essence, extends their memory. They are therefore well suited to learning from important experiences separated by very long time lags.

LSTMs enable RNNs to remember their inputs over long periods of time. An LSTM keeps information in a memory cell that works much like a computer’s memory: the network can read from it, write to it and delete information from it.

How they work

An LSTM has three gates: the input, forget and output gates. These gates decide whether to let new input in (input gate), to delete information because it is no longer important (forget gate), or to let the stored information affect the output at the current time step (output gate).

The gates in an LSTM are analog, implemented as sigmoids, meaning their values range from 0 to 1. Because sigmoids are differentiable, the network can be trained end to end with backpropagation.
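The three gates described above can be sketched as a single LSTM time step in plain numpy. This is a minimal illustration, not a specific library's implementation: the weight layout (one matrix `W` producing all four pre-activations) and the names are assumptions made for compactness.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step (illustrative sketch).
    W maps the concatenation [h_prev; x] to the four pre-activations:
    input gate, forget gate, output gate and the cell candidate."""
    z = W @ np.concatenate([h_prev, x]) + b
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])        # input gate: how much new input to let in (0..1)
    f = sigmoid(z[H:2*H])      # forget gate: how much old memory to keep (0..1)
    o = sigmoid(z[2*H:3*H])    # output gate: how much memory to expose (0..1)
    g = np.tanh(z[3*H:4*H])    # candidate values to write into the cell
    c = f * c_prev + i * g     # "delete" via f, "write" via i
    h = o * np.tanh(c)         # "read" the cell state through the output gate
    return h, c
```

Note that every gate is a sigmoid between 0 and 1, so the whole update is differentiable and gradients can flow back through it during training.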

Applications of LSTM:

1.) Text generation

2.) Handwriting recognition

3.) Music generation

4.) Language translation
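For the first application, text generation, the typical loop is: feed the last character in, update the recurrent state, turn the output into a probability distribution over the vocabulary, and sample the next character. The sketch below shows only that loop; the "model" is a random linear map standing in for a trained LSTM, and all names are hypothetical.

```python
import numpy as np

def sample_text(seed, length, vocab, rng_seed=0):
    """Toy character-level generation loop (illustrative only:
    a real setup would compute `h` with a trained LSTM instead of
    this random recurrent stand-in)."""
    rng = np.random.default_rng(rng_seed)
    V = len(vocab)
    W = rng.normal(size=(V, V))        # stand-in for trained weights
    h = np.zeros(V)
    out = list(seed)
    idx = vocab.index(seed[-1])
    for _ in range(length):
        x = np.eye(V)[idx]             # one-hot encode the last character
        h = np.tanh(W @ x + h)         # stand-in for the recurrent update
        p = np.exp(h) / np.exp(h).sum()  # softmax over the vocabulary
        idx = int(rng.choice(V, p=p))  # sample the next character
        out.append(vocab[idx])
    return "".join(out)
```

The sampled text is gibberish here, since nothing is trained, but the generation loop itself is the same one a trained character-level LSTM would use.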


The real magic of LSTM networks is that they achieve almost human-level quality in sequence generation, without any magic at all.