Time Series Forecasting with LSTM
Difficulty: Beginner
Duration: 45m
Students: 1053
Rating: 4.2/5
Description

From the internals of a neural network to solving real problems with one, this course covers the essentials needed to succeed in machine learning.

This course moves on from cloud computing power to cover recurrent neural networks. Learn how to use them to train more complex models.

Understand how models are built to handle data that comes in sequences, such as unstructured text, music, and even movies.

This course comprises 9 lectures with 2 accompanying exercises.

Learning Objectives

  • Understand how recurrent neural network models are built
  • Learn the various applications of recurrent neural networks

Prerequisites

 

Transcript

Hello and welcome back. In the last video, we built a sequential, fully connected model that took one data point as input and tried to predict the next data point. As you saw, it didn't do very well, so now we're going to build a recurrent predictor that will hopefully perform a little better. The first thing we're going to do is load the LSTM layer from the Keras layers module; this is going to be our recurrent layer. Our recurrent layer requires an input with the following shape: the batch size, the number of timesteps, and the input dimension. In our case we have only one point, so the input dimension is going to be one, and the number of timesteps is also going to be one, so we need to expand this tensor into a 3D tensor. We do this with the next line: indexing x_train with [:, None] adds an empty dimension to our tensor.
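Here is a minimal sketch of that reshaping step; the variable names and placeholder data are illustrative assumptions, not the course's actual notebook:

```python
import numpy as np

# The LSTM layer expects a 3D input of shape (batch, timesteps, input_dim),
# so we add an axis to turn an (n_samples, 1) array into (n_samples, 1, 1).
X_train = np.arange(10, dtype="float32").reshape(-1, 1)  # placeholder series

X_train_t = X_train[:, None]   # indexing with None inserts an empty axis
print(X_train_t.shape)         # (10, 1, 1)
```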

So we're going to call this x_train_t, t for tensor, and reshape both x_train and x_test. Next, we build our model, still with the Sequential API: one LSTM layer with six nodes and an input shape of (1, 1), that is, one timestep and one feature. The last layer is dense, and we compile the model with the mean squared error loss and the Adam optimizer. Then we train the model with an early stopping callback and see how it goes. Training is pretty fast because our dataset is small. Notice that I've given it a batch size of one, so I'm training on each data point. I'll let it train for a while, and when it stops, I'll check how it's doing.
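As a rough sketch of the model described here, assuming TensorFlow's bundled Keras and placeholder training data standing in for the course's dataset:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.callbacks import EarlyStopping

# Placeholder data; in the course, these come from the earlier videos.
X_train_t = np.random.rand(100, 1, 1).astype("float32")  # (batch, 1, 1)
y_train = np.random.rand(100, 1).astype("float32")

model = Sequential()
model.add(LSTM(6, input_shape=(1, 1)))  # 6 nodes; 1 timestep, 1 feature
model.add(Dense(1))                     # dense output layer: one value

model.compile(loss="mean_squared_error", optimizer="adam")

# Stop when the training loss is no longer improving, as in the video.
early_stop = EarlyStopping(monitor="loss", patience=1)
model.fit(X_train_t, y_train,
          epochs=100,            # upper bound; early stopping ends it sooner
          batch_size=1,          # one weight update per data point
          callbacks=[early_stop])
```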

It stopped after 13 epochs because the loss was no longer improving, so let's see our predictions on the test set. The recurrent model did not do much better than the sequential fully connected model. It does seem to recall these last values a little better, but like the previous model, it's lagging by one: it has essentially learned to reproduce the previous value a little better, but it's still not good. What we want is for the two curves to overlap, and right now they don't. So we still need to improve our model, and we'll do this by using windows in one of the next videos. For now, thank you for watching and see you in the next video.
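A hedged sketch of that test-set check, comparing the two curves; `model` is assumed trained as sketched above, and `X_test_t` and `y_test` are assumed to be prepared like the training data:

```python
import matplotlib.pyplot as plt

# Predict on the reshaped test set and overlay actual vs. predicted.
y_pred = model.predict(X_test_t)

plt.plot(y_test, label="actual")
plt.plot(y_pred, label="predicted")  # expect this curve to lag by one step
plt.legend()
plt.show()
```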

About the Author
Students: 8911
Courses: 8
Learning Paths: 8

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program of 2011.