Sequence Problems
Difficulty: Beginner
Duration: 45m
Students: 1049
Rating: 4.2/5
Description

From the internals of a neural net to solving real problems with neural networks, this course covers the essentials needed to succeed in machine learning.

This course moves on from cloud computing power to cover Recurrent Neural Networks. Learn how to use recurrent neural networks to train more complex models.

Understand how models are built to handle data that comes in sequences, such as unstructured text, music, and even movies.

This course comprises 9 lectures with 2 accompanying exercises.

Learning Objectives

  • Understand how recurrent neural network models are built
  • Learn the various applications of recurrent neural networks

Prerequisites

 

Transcript

Hello and welcome to this video on Sequence Problems. In this video, we will introduce other types of problems involving sequences, and we will also introduce the concept of Recurrent Neural Networks. Time series problems can be extended to consider general problems involving sequences. Let's see a few of them. The simplest machine learning problem involving a sequence is the one to one problem. In this case, we have one data input to the model, and this could also be a tensor input, not necessarily just a number. And the model generates a prediction with the given input. All the problems we've encountered so far fall in this category. Linear regression, classification, and even image classification with convolutional neural networks are all one to one problems. Let's see why. In the case of digit classification, for each input image, we have one digit, one label.
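
To make the one to one setup concrete, here is a minimal sketch of such a model, assuming the Keras API; the layer sizes are illustrative and not from the lecture:

```python
# One to one: a single input tensor (here a 28x28 image) goes in,
# and a single prediction (one of 10 digit labels) comes out.
# No notion of time or sequence is involved.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

model = Sequential([
    Flatten(input_shape=(28, 28)),    # one input image...
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),  # ...one output label
])
model.summary()
```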

When we're classifying purchases for users, for each user we have a purchase label, and when we're distinguishing fake bank notes from real ones, for each bank note we have one label, fake or real. In the case of sequences, we can extend this formulation to allow the model to make use of the past values of the input and of the output. The one to many problem starts like the one to one problem. We have an input to the model, and the model generates one output. However, the output of the model is fed back to the network as a new input, and the network can generate a new output, and we can continue like this indefinitely.
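
A toy sketch of that feedback loop, in plain numpy; all the weight matrices here are random stand-ins, made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(8, 4))   # input -> hidden
W_hh = rng.normal(size=(8, 8))   # hidden -> hidden (the recurrent connection)
W_hy = rng.normal(size=(4, 8))   # hidden -> output

x = rng.normal(size=4)           # the single initial input
h = np.zeros(8)                  # hidden state, carried from step to step
outputs = []
for _ in range(10):              # generate a sequence of length 10
    h = np.tanh(W_xh @ x + W_hh @ h)  # update state from input and past state
    y = W_hy @ h                      # produce an output
    outputs.append(y)
    x = y                             # feed the output back in as the next input
```

Notice that the hidden state h is what lets the network remember what it has produced so far; we will come back to this idea at the end of the video.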

A typical example of this situation is image captioning. A single input image generates as output a text description of arbitrary length. I guess you can already start to see why these are called Recurrent Neural Networks. The many to one problem reverses the situation. We feed multiple inputs to the network, and at each step, we also feed the network output back into the network, until we reach the end of the input sequence, at which point we look at the network output. Text sentiment analysis falls in this category, because we associate a single output sentiment value, either positive or negative, to an input string of text of arbitrary length, as sketched in the code below.
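
A minimal many to one sentiment model might look like this in Keras; the vocabulary size and layer widths are hypothetical:

```python
# Many to one: a whole sequence of word indices goes in, and only
# the final output, a single sentiment score, is read out.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense

model = Sequential([
    Embedding(input_dim=10000, output_dim=32),  # word index -> dense vector
    SimpleRNN(32),                   # consume the sequence, keep the last state
    Dense(1, activation='sigmoid'),  # one output: positive vs. negative
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```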

Then we go to the many to many case. In the many to many case, we have a sequence in input and a sequence in output. This is the case of text translation, where an input sentence of arbitrary length is translated to an output sentence in a different language. Finally, there's the synchronous many to many case, where the network outputs a value at each input step, considering both the current input and its previous state. Video frame classification falls in this category, because for each frame, we produce a label using the information from the frame, but also the information from the state of the network.

Recurrent Neural Networks can deal with all these sequence problems because their connections form a directed cycle. In other words, they are able to retain state from one iteration to the next, by using their own output as input for the next step, just like the hidden state in the feedback loop sketched earlier. This is similar to infinite impulse response filters in signal processing. In programming terms, this is like running a fixed program with certain inputs and some internal variables. Viewed this way, RNNs essentially describe programs. In fact, RNNs are Turing complete, which means they can simulate arbitrary programs. If we think of feedforward neural networks as approximating arbitrary functions, Recurrent Neural Networks approximate arbitrary programs. And this makes them really, really powerful.

In conclusion, in this video, we've introduced various types of sequence problems, and we've also introduced Recurrent Neural Networks, which are a very powerful technique in Deep Learning. Thank you for watching and see you in the next video.

About the Author
Students: 8883
Courses: 8
Learning Paths: 8

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program of 2011.