## The course is part of these learning paths

From the internals of a neural net to solving real problems with neural networks, this course covers the essentials needed to succeed in machine learning.

Moving on from cloud computing power, this course covers recurrent neural networks: learn how to use them to train more complex models.

Understand how models are built to handle data that comes in sequences, such as unstructured text, music, and even movies.

This course comprises 9 lectures with 2 accompanying exercises.

**Learning Objectives**

- Understand how recurrent neural network models are built
- Learn the various applications of recurrent neural networks

**Prerequisites**

- It is recommended to complete the Introduction to Data and Machine Learning course before starting.

Hello, and welcome to this video on rolling windows. In this video, we will talk about rolling windows, a technique to extract features from a time series. In all the problems we've encountered so far, we've always asked our networks to predict the future given the knowledge of their internal state and of the current value of the sequence. What if, instead of feeding just one input value at each step, we fed a short history of previous values? This is called the rolling window approach, and it's pretty simple to understand. We take a fixed-size window in our sequence and use all the data contained in the window to predict values of the sequence after the window. Then we slide the window by a certain amount, usually half the length of the window, and formulate a new prediction.
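The windowing step just described can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the course: the function name `make_windows` and its parameters are assumptions, and the default stride of half the window length follows the convention mentioned above.

```python
import numpy as np

def make_windows(series, window_size, horizon=1, stride=None):
    """Slice a 1-D series into overlapping windows and their targets.

    Each X[i] holds `window_size` consecutive values; y[i] holds the
    `horizon` values that immediately follow that window. By default
    the window slides by half its own length.
    """
    if stride is None:
        stride = window_size // 2
    X, y = [], []
    for start in range(0, len(series) - window_size - horizon + 1, stride):
        end = start + window_size
        X.append(series[start:end])          # the window: the "short history"
        y.append(series[end:end + horizon])  # the value(s) to predict
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float)  # toy sequence 0, 1, ..., 9
X, y = make_windows(series, window_size=4)
print(X.shape, y.shape)  # (3, 4) (3, 1): 3 windows of 4 points, stride 2
```

Each row of `X` then becomes one training example, and the model learns to map a window of past values to the value that follows it.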

And so on, we can continue shifting the window by a fixed amount. The rolling window approach has been used in many successful applications, and, in fact, it existed long before neural networks were invented. It can be used in general with machine learning and traditional features: we compute features at each window and then pass these features to a model that will predict the future based on them. In the case of deep learning, we have two options for using these windows. We can feed the window to a recurrent neural net in sequence, one point at a time, asking for a prediction at the end. This is the many-to-one sequence problem that we've just encountered. We can also do something different: we can feed the window all at once, using each point in the past as a different feature. If we do this, we are not bound to the use of just RNNs. We can also use convolutional or fully-connected neural networks, because we've mapped the original sequence problem onto a one-to-one problem where the input space is a vector. In this video we've introduced the windowing technique for time series and explained how you can extract features from such windows. We've also explained how predictions are formulated from window features. So, thank you for watching, and see you in the next video.

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program in 2011.