- Getting Started With Deep Learning: Recurrent Neural Networks

# Rolling Windows Continued


**Difficulty:** Beginner

**Duration:** 45m

**Students:** 218

### Description

From the internals of a neural net to solving problems with neural networks, this course covers the essentials needed to succeed in machine learning.

This course moves on from cloud computing power and covers recurrent neural networks. Learn how to use recurrent neural networks to train more complex models.

Understand how these models are built to handle data that comes in sequences, such as unstructured text, music, and even movies.

This course comprises 9 lectures with 2 accompanying exercises.

**Learning Objectives**

- Understand how recurrent neural network models are built
- Learn the various applications of recurrent neural networks

**Prerequisites**

- It is recommended that you complete the Introduction to Data and Machine Learning course before starting.

### Transcript

Hello and welcome back. In this video we're going to use a recurrent neural network to predict the current value of a time series from a few values in the past, and we're also going to build a fully connected model that does the same thing. The first thing we're going to do is build our scaled time series, and we do this using the shift function from Pandas. We have our initial time series of data, and we go back in time for 12 months: for each shift in the range from 1 to 13 excluded, we shift the column Scaled by that period and save those values into a column named Shift followed by the number.
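The lag-feature construction described above can be sketched as follows. The DataFrame name `df` and the toy data are assumptions for illustration; the column name `Scaled` follows the transcript.

```python
import numpy as np
import pandas as pd

# Toy monthly series standing in for the scaled time series.
df = pd.DataFrame({"Scaled": np.sin(np.linspace(0, 8, 48))})

# Create 12 lag columns: Shift_1 is the previous month, Shift_2
# the month before that, and so on, back one full year.
for shift in range(1, 13):
    df[f"Shift_{shift}"] = df["Scaled"].shift(shift)

# The first 12 rows now contain NaNs, so drop them.
df = df.dropna()

# Targets are the current values; features are the 12 lags.
y = df["Scaled"].values
X = df.drop("Scaled", axis=1).values
print(X.shape)  # (36, 12)
```

Each row of `X` is a 12-month window ending the month before the target value in `y`, which is exactly the rolling-window setup the transcript describes.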

A little complicated, but this is what's going on: this is our original data, and the shift by one is nothing but the same data shifted down by one month, which means that for each point, this column holds the previous month, this one holds two months before, and so on. So we've created these 12 shifts going back in time. What we do next is drop the rows with null values, so we basically drop the first year of data and start from the second year. We drop the null values from the DataFrame, and we also drop the column Scaled, which is going to be our target. So we create X_train, which holds the shifts, and y_train, which is Scaled, and we do the same thing for testing. This is what X_train looks like: its shape is 228 points by 12 shifts, one for each of the 12 months. Now we create our NumPy arrays and train our models. Let's start with the fully connected model on the windows. It has 12 nodes in input, an inner layer with 12 nodes, and one node in output. This is what our model looks like: it has 169 parameters, and we fit it again with the callback for EarlyStopping. The fully connected model stopped improving after 25 epochs, so the EarlyStopping callback stopped the training. We predict on the test set, compare, and see that this model is now doing a lot better at predicting the current value.
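A minimal sketch of the fully connected model on the windows, assuming the Keras API; the layer sizes follow the transcript (12 inputs, a 12-node hidden layer, one output), which is where the 169 parameters come from. The activation and optimizer choices here are assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Input(shape=(12,)),            # one value per month of the past year
    Dense(12, activation="relu"),  # hidden layer: 12*12 + 12 = 156 params
    Dense(1),                      # output layer:  12 + 1  =  13 params
])
model.compile(optimizer="adam", loss="mean_squared_error")
model.summary()  # Total params: 169

# Stop training once the loss stops improving, as in the transcript.
early_stop = EarlyStopping(monitor="loss", patience=3)
# model.fit(X_train, y_train, epochs=200, callbacks=[early_stop])
```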

So, you see we don't have that shift any longer: using the data from the previous year, the model seems to be doing quite well at predicting the current month. This is already a great improvement. Let's see what the performance of the recurrent neural network is going to be. In order to do that, we reshape our data to a tensor, and I just want to show you how I reshaped it: if I print the shape, we have one time instance and a vector of 12 coordinates. The way I'm treating this data, I'm passing all 12 months at once, as if they were a snapshot in time of the last year. We could have, and in fact you will in one of the exercises, reshaped it so that we were passing 12 consecutive months one at a time. We build our model with an input shape of (1, 12), and before I fit the model, I'm going to show you the number of parameters.
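The reshape described above can be sketched like this; `X_train` here is a stand-in random array with the shape the transcript mentions. Each sample becomes a single time step carrying a 12-dimensional snapshot of the previous year.

```python
import numpy as np

X_train = np.random.rand(228, 12)        # 228 windows of 12 lags each

# (samples, timesteps, features): one time step, 12 features.
X_train_t = X_train.reshape(228, 1, 12)
print(X_train_t.shape)  # (228, 1, 12)

# The alternative left for the exercise would instead pass the
# 12 consecutive months one at a time:
X_alt = X_train.reshape(228, 12, 1)      # 12 timesteps, 1 feature each
```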

So, the model only has one layer, and this layer has six LSTM nodes. If you're confused by the number of parameters of this model, keep in mind that we have 12 inputs to our LSTM, and each LSTM unit has three gates plus a candidate cell state, each with its own matrix of weights. So: six LSTM nodes, each receiving 12 inputs, each with those sets of weights. If you do the calculation, this is the number of parameters you're going to get. We train the model with the EarlyStopping callback, and it's still pretty fast, so we'll let it run and see how good it gets. Our model ran for 22 epochs and then stopped due to early stopping, so let's see how it does. It's pretty nice. As you can see, this model too has learned to reproduce the yearly shape of our data, and it doesn't have the lag it used to have with only one data input. It's still underestimating our high months by a certain amount, so there is probably room for improvement in this model, by changing the configuration or by changing the optimizer, but so far I'm pretty satisfied with this. I'll let you try the other way of doing it in the exercise, passing the 12 periods one at a time to the LSTM. So, thank you for watching and see you in the next video.
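A sketch of the single-layer LSTM model, assuming the Keras API. Keras counts four weight sets per unit (the three gates plus the candidate cell state), so with 6 units and 12 input features the LSTM layer works out to 4 × (6·12 + 6·6 + 6) = 456 parameters; a Dense output layer, assumed here for the single-value prediction, adds 7 more.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Input
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Input(shape=(1, 12)),  # one time step, 12 lag features
    LSTM(6),               # 4 * (6*12 + 6*6 + 6) = 456 params
    Dense(1),              # 6 + 1 = 7 params (assumed output layer)
])
model.compile(optimizer="adam", loss="mean_squared_error")
model.summary()

early_stop = EarlyStopping(monitor="loss", patience=3)
# model.fit(X_train_t, y_train, epochs=200, callbacks=[early_stop])
```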

**Students:** 3626

**Courses:** 8

**Learning paths:** 4

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program of 2011.