
# Exercise 1: Solution


## Recurrent Neural Networks

- **Difficulty:** Beginner
- **Duration:** 45m
- **Students:** 57

### Description

From the internals of a neural net to solving real problems with neural networks, this course covers the essentials needed to succeed in machine learning.

This course moves on from cloud computing power and covers Recurrent Neural Networks. Learn how to use recurrent neural networks to train more complex models.

Understand how models are built to handle data that comes in sequences, such as unstructured text, music, and even movies.

This course comprises nine lectures and two accompanying exercises.

**Learning Objectives**

- Understand how recurrent neural network models are built
- Learn the various applications of recurrent neural networks

**Prerequisites**

- It is recommended to complete the Introduction to Data and Machine Learning course before starting.

### Transcript

Hello and welcome to the solution of exercise one in section eight. Exercise one asked us to reshape the data as a sequence of 12 values, each being a single number. So this is pretty easy: we use the reshape method on a NumPy array. And as you can see now, our X_train tensor has a shape of 228 samples by 12 timesteps by 1, the size of each feature vector. So we can pass this to our previous Keras LSTM model. The only thing we need to change is the shape of the input. We do that, and our model now has fewer parameters, because the input size is one and not 12.
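The reshape step described above can be sketched as follows. The array contents here are stand-ins (only the dimensions, 228 samples of 12 values, come from the transcript), and the commented-out Keras line is illustrative:

```python
import numpy as np

# Stand-in for the exercise data: 228 sequences of 12 values each.
X_train = np.arange(228 * 12, dtype="float32").reshape(228, 12)

# Reshape to (samples, timesteps, features):
# 12 timesteps, each a single number.
X_train = X_train.reshape(228, 12, 1)
print(X_train.shape)  # (228, 12, 1)

# The LSTM's input shape then changes from (12,) to (12, 1), e.g. in Keras:
#   model.add(LSTM(6, input_shape=(12, 1)))
# With an input size of 1 instead of 12, the layer has fewer parameters.
```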

So we run that model, again with early stopping, and we'll see how well it does. Okay, our model stopped after nine epochs and we can already see that the loss is a bit higher than the previous losses. So if we plot what it got, we see that it didn't really learn the pattern very well. If we remove the early stopping condition and increase the batch size and run the training for much, much longer, this model does learn to reproduce our data really well and even on the test data it performs very well. So this is an important lesson. LSTMs perform amazingly well on sequences, but they do take a lot of time to train. So whenever you are using an LSTM, be prepared to let the training run for a long time. So thank you for watching and see you in the next video.
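The early-stopping behavior mentioned above (training halts once the loss stops improving for a few epochs) can be sketched with a simple patience check, similar in spirit to Keras's `EarlyStopping` callback. The function name, loss values, and patience threshold here are illustrative:

```python
def should_stop(losses, patience=3, min_delta=0.0):
    """Return True when the loss hasn't improved for `patience` epochs."""
    if len(losses) <= patience:
        return False
    best_before = min(losses[:-patience])   # best loss seen earlier
    recent_best = min(losses[-patience:])   # best loss in the patience window
    return recent_best > best_before - min_delta

# Hypothetical loss history: improvement stalls after epoch 3.
history = [0.9, 0.5, 0.4, 0.41, 0.42, 0.43]
print(should_stop(history))  # True
```

Removing this condition (or raising `patience`) is what lets the LSTM train "much, much longer" and eventually fit the sequence well, at the cost of training time.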

### About the Author

- **Students:** 1048
- **Courses:** 8
- **Learning paths:** 3

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI, and graduated from the Singularity University summer program in 2011.