
# Time Series Forecasting

## Contents

- Recurrent Neural Networks


**Difficulty:** Beginner

**Duration:** 45m

**Students:** 171

### Description

From the internals of a neural net to solving real problems with neural networks, this course covers the essentials needed to succeed in machine learning.

This course moves on from cloud computing power and covers Recurrent Neural Networks. Learn how to use recurrent neural networks to train more complex models.

Understand how models are built to handle data that comes in sequences. Examples include unstructured text, music, and even movies.

This course comprises 9 lectures with 2 accompanying exercises.

**Learning Objectives**

- Understand how recurrent neural network models are built
- Learn the various applications of recurrent neural networks

**Prerequisites**

- It is recommended to complete the Introduction to Data and Machine Learning course before starting.

### Transcript

We're going to use a small dataset from the Canada Bureau of Statistics: the monthly retail sales from 1991 to the present day. The first thing we're going to do is reshuffle the data a little bit, doing a bit of data cleaning to set the dates to be the index, since Pandas deals with datetimes really well. So we take the df Adjustment column; these dates are actually strings, so we pass the column to pd.to_datetime, like we did in one of the first sections, so that it becomes a datetime object. Since we want each date to be the last day of the month, we also add the MonthEnd offset and save the result back to the same Adjustment column. Then we set the index of the dataframe to be this new column, which means our dataframe is now indexed by date, leaving the two series, Unadjusted and Seasonally Adjusted.

This allows us to call df.plot and very easily display our two series: the unadjusted values in blue and the seasonally adjusted values in orange. As you can see, the 2008 financial crisis is very clearly visible, but otherwise there is clear growth over the whole period.

Now, as I've mentioned in class, when you work with time series it's very important to split train and test with respect to a certain date: you don't want your test data to come before your training data. So that's what we're going to do next. We define January 1st, 2011 as our split date. We're going to predict the unadjusted data, so we split it from the beginning up to the split date for training and from the split date onwards for testing. If we plot just those two, you can see that we use the earlier data for training and the following few years for testing. Perfect.
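The actual Statistics Canada file isn't reproduced here, so the steps above can be sketched on a synthetic frame of the same shape (the column names mirror the lecture's; the values and the earlier split date are stand-ins):

```python
import numpy as np
import pandas as pd

# The real retail-sales file isn't bundled here, so we build a small
# synthetic frame with the same shape: an 'Adjustment' string-date column
# plus the unadjusted and seasonally adjusted series.
n = 120  # ten years of monthly observations starting in 1991
rs = np.random.RandomState(0)
df = pd.DataFrame({
    'Adjustment': pd.date_range('1991-01-01', periods=n, freq='MS').strftime('%Y-%m-%d'),
    'Unadjusted': np.linspace(10_000, 30_000, n) + rs.randn(n) * 500,
    'Seasonally adjusted': np.linspace(10_000, 30_000, n),
})

# Parse the string dates, then shift each one to the end of its month.
df['Adjustment'] = pd.to_datetime(df['Adjustment']) + pd.tseries.offsets.MonthEnd()

# Index by date so plotting and date-based slicing become trivial.
df = df.set_index('Adjustment')
# df.plot()  # would draw both series against the date index

# Split at a fixed date so no test observation precedes the training data
# (the lecture uses 2011-01-01; this synthetic range ends in 2000).
split_date = pd.Timestamp('1998-01-01')
train = df.loc[:split_date, 'Unadjusted']
test = df.loc[split_date:, 'Unadjusted']
```

Slicing a sorted DatetimeIndex with `.loc` accepts dates that are not in the index, which is why the month-start split date works even though the index holds month ends.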

The next thing we're going to do is rescale our data. One thing that is important when you do a train/test split and rescaling is to fit and transform your training data, and then only transform your test data. That is exactly what we're going to do, and the reason is that you don't want to assume you know the scale of your test data: you transform everything, but you only fit, meaning you learn the range of the data, on your training data. Practically, this means that in the new scaled variables the training data will be between zero and one, and the test data will go slightly beyond that, but that's okay.

These are our new scaled data points. The next thing we do is define our target variable to be the next point with respect to each point, so we're building a model that, given each value, predicts the following one. Our model learns from the previous value, which is why we take the scaled training values up to the second-to-last one as our training data, and our training labels for the regression, for our forecasting, are the same data shifted by one.

Okay, the first model we're going to build is a fully connected model with one input and one output. We use the mean_squared_error loss because this is a regression problem, and we set an inner layer of 12 nodes. So, this is what our model looks like: we have two layers, the first with 12 nodes and one input, and the second, the output layer, with only one node.
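The fit-on-train-only scaling and the shift-by-one target can be sketched like this (the lecture likely uses a scaler object; here the min/max statistics are computed by hand, and the series values are hypothetical):

```python
import numpy as np

# Stand-in values for the unadjusted series (the real ones come from the
# dataframe; these are hypothetical numbers for illustration).
train = np.linspace(10_000, 20_000, 80)
test = np.linspace(20_000, 24_000, 20)

# Fit on the training data only: learn its minimum and range...
lo, span = train.min(), train.max() - train.min()
# ...then transform BOTH sets with those same training statistics.
train_sc = (train - lo) / span
test_sc = (test - lo) / span  # can exceed 1.0, and that's okay

# Each point's target is the next point: the inputs are every scaled
# training value up to the second-to-last, and the labels are the same
# series shifted forward by one.
X_train, y_train = train_sc[:-1], train_sc[1:]
```

Because the statistics come from the training set, the scaled training values land exactly in [0, 1] while later, larger test values land slightly above 1, just as described.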

Notice that the second layer doesn't have an activation function: because this is a regression problem, we want this node to be able to output any value. We also set the EarlyStopping callback, now that we know about it, and we run our training. As you can see, the loss is decreasing, so our model is learning from the data, but at some point it stops improving; 20 epochs is the maximum, but the EarlyStopping callback stopped the training before that.

Since we've split the data into training and testing, we can predict the values for X_test and compare them with the actual values. As you can see, our model is not really good: it's essentially repeating the previous value, which is actually really bad. The two curves are shifted, so our model learned to mirror its input, and not very well either. The fully connected model is not really able to predict the future from a single previous value. This is why we will need recurrent neural networks to get better results. Thank you for watching, and see you in the next video.
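The lecture's Keras code isn't reproduced here, but as a from-scratch sketch of the same idea (plain numpy instead of Keras, with an illustrative toy series and a hand-rolled early-stopping rule), the one-input, 12-node, one-linear-output regression network might look like:

```python
import numpy as np

rng = np.random.RandomState(42)

# Toy scaled series standing in for the retail data: predict each value
# from the previous one (all values here are illustrative).
s = np.sin(np.linspace(0, 6, 200)) * 0.4 + 0.5
X, y = s[:-1].reshape(-1, 1), s[1:].reshape(-1, 1)

# One input -> 12 hidden nodes (relu) -> one linear output node.
W1, b1 = rng.randn(1, 12) * 0.5, np.zeros(12)
W2, b2 = rng.randn(12, 1) * 0.5, np.zeros(1)

lr, best, patience, wait = 0.1, np.inf, 10, 0
for epoch in range(500):
    h = np.maximum(X @ W1 + b1, 0.0)   # hidden layer with relu
    pred = h @ W2 + b2                 # no activation: any output value allowed
    loss = np.mean((pred - y) ** 2)    # mean squared error, as in the lecture
    if epoch == 0:
        first_loss = loss

    # EarlyStopping-style rule: quit once the loss stops improving.
    if loss < best - 1e-6:
        best, wait = loss, 0
    else:
        wait += 1
        if wait >= patience:
            break

    # Backpropagate the MSE gradient and take a plain gradient step.
    g = 2 * (pred - y) / len(y)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (h > 0)
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

The last layer's lack of an activation is what lets the output take any value; the early-stopping check mirrors what the Keras callback does, halting when the loss plateaus instead of running out the full epoch budget.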

# About the Author

**Students:** 2914

**Courses:** 8

**Learning paths:** 3

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics from the University of Padua and Université de Paris VI, and graduated from Singularity University's summer program in 2011.