Understanding sequential/time-series data for LSTMs



By Raman Shinde

LSTMs can be very confusing, especially for beginners. Recently I was working on a deep learning case study of Human Activity Recognition in which the dataset provided is time-series data. The dataset is available here.

The question is: given time-series data, how do we frame it so that it can be fed to an LSTM?

For simplicity, let’s assume we have a 1-axial time-series signal from each of two sensors, an accelerometer and a gyroscope. The signal would look like this…

Let’s consider only one signal for now. We can cut the signal down with a sliding fixed-width window of 2.56 seconds: the first slice contains the signal from 0 to 2.56 s, the next one from 2.56 to 5.12 s, and so on. From each slice (e.g. 0 to 2.56 s) we take 128 readings by sampling it. These 128 readings act as the number of time steps, i.e. the sequence length, for the LSTM.
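For example, here is a minimal sketch of this slicing in NumPy; the 50 Hz sampling rate, the fake signal, and the helper name slice_signal are illustrative assumptions, chosen only so that each 2.56 s window holds 128 readings.

```python
import numpy as np

# Illustrative parameters (assumed): 50 Hz sampling rate -> 128 readings per 2.56 s window
SAMPLING_RATE = 50
WINDOW_SECONDS = 2.56
WINDOW_SIZE = int(SAMPLING_RATE * WINDOW_SECONDS)   # 128 readings per slice

def slice_signal(signal, window_size=WINDOW_SIZE):
    """Cut a 1-D signal into consecutive fixed-width slices of `window_size` readings."""
    n_slices = len(signal) // window_size
    # Drop the tail that does not fill a complete window, then reshape into (n_slices, window_size)
    return signal[:n_slices * window_size].reshape(n_slices, window_size)

# A fake 1-axial accelerometer signal, used only to demonstrate the shapes
acc_signal = np.random.randn(128_000)
acc_slices = slice_signal(acc_signal)
print(acc_slices.shape)   # (1000, 128): 1000 slices, 128 readings each
```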

Similarly, we can do the same for the other time-series signals and construct a vector from the 128 readings of each slice. Each resulting vector has size [1 × 128].

How do we represent this data?

Just transform each vector into a column vector and concatenate the vectors from each feature. For one time slice, we get a 2-D tensor of shape (time_steps, features).
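Continuing the sketch above (the gyroscope signal is again fake data), slicing the second sensor in the same way and stacking one slice from each sensor column-wise gives a single (time_steps, features) sample:

```python
# Slice the gyroscope signal exactly like the accelerometer signal
gyro_signal = np.random.randn(128_000)
gyro_slices = slice_signal(gyro_signal)                    # (1000, 128)

# One slice from each sensor, stacked column-wise -> one sample of shape (time_steps, features)
sample = np.stack([acc_slices[0], gyro_slices[0]], axis=-1)
print(sample.shape)                                        # (128, 2)
```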

A single data sample

Remember, an LSTM requires its input as a 3-D tensor of shape (batch_size, time_steps, features).

In the same way, we can obtain more data samples from the remaining slices we made earlier. For example, if we have a signal roughly 2,600 seconds long, we can get about 1,000 slices of 2.56 s each from it.

So, our final tensor, which we can feed to the LSTM, will have the shape (1000, 128, 2).
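Continuing the same sketch, stacking every slice from both sensors produces exactly this tensor:

```python
# Stack all 1000 slices from both sensors into one 3-D tensor
X = np.stack([acc_slices, gyro_slices], axis=-1)
print(X.shape)   # (1000, 128, 2) = (batch_size, time_steps, features)
```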

We can feed this tensor to our LSTM. The code is given below; we are using 5 LSTM units.
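The original snippet is not reproduced here, so the following is only a minimal Keras sketch of such a model with 5 LSTM units; the Dense classification head, the number of classes, the loss, and the optimizer are assumptions rather than details from the original post.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

N_TIME_STEPS = 128   # readings per slice
N_FEATURES = 2       # accelerometer + gyroscope
N_CLASSES = 6        # assumed number of activity labels

model = Sequential([
    # 5 LSTM units; the batch dimension is left implicit in input_shape
    LSTM(5, input_shape=(N_TIME_STEPS, N_FEATURES)),
    Dense(N_CLASSES, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

# X from the sketch above has shape (1000, 128, 2) and could be trained with, e.g.:
# model.fit(X, y_one_hot, epochs=..., batch_size=...)
```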

How is the data processed?

Consider a single sample for simplicity, so the input to the LSTM has shape (1, 128, 2). That is, we have 128 two-dimensional row vectors, one per time step, each holding the two features. They are passed to the LSTM sequentially, one time step at a time.

These 128 two-dimensional row vectors are fed, step by step, to the LSTM we defined above. The unrolled version of the LSTM is shown below…
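One way to make that unrolled picture concrete in code (again a sketch, not taken from the original post) is to set return_sequences=True, which exposes one 5-dimensional hidden output for each of the 128 time steps of a single (1, 128, 2) sample:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

# A single sample: batch of one, 128 time steps, 2 features per step (fake data)
single_sample = np.random.randn(1, 128, 2)

# return_sequences=True returns the hidden output of every one of the 128 steps,
# mirroring the unrolled view of the 5-unit LSTM
unrolled_view = Sequential([LSTM(5, input_shape=(128, 2), return_sequences=True)])
print(unrolled_view.predict(single_sample).shape)   # (1, 128, 5)
```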

EndNote:

  1. Thanks a lot if you have read this far. This is my first attempt at blogging, so I hope readers will be a bit generous and forgive any minor mistakes I may have made.
  2. All the values used in this example were chosen for convenience.
  3. If the article appeals to you, please leave comments, feedback, constructive criticism, etc.

References:

  1. https://stackoverflow.com/questions/51749404/how-to-connect-lstm-layers-in-keras-repeatvector-or-return-sequence-true
  2. https://stats.stackexchange.com/questions/274478/understanding-input-shape-parameter-in-lstm-with-keras