
Exercise 2: Solution

Developed with
Catalit
Overview

Difficulty: Beginner
Duration: 1h 45m
Students: 147
Rating: 5/5

Description

Learn about the importance of gradient descent and backpropagation, under the umbrella of Data and Machine Learning, from Cloud Academy.

From the internals of a neural net to solving real problems with neural networks, this course covers the essentials needed to succeed in machine learning.

Learning Objectives

  • Understand the importance of gradient descent and backpropagation
  • Be able to build your own neural network by the end of the course

Prerequisites

 

Transcript

Hey guys, welcome to the solution of Exercise 2 in Section 5. In this exercise, we are told which network to build and train, and we have to define a feature function to check how well our model is separating the data and what higher-level features it's finding at layer number three. So, first thing, we are going to build the model: eight nodes, five nodes, two nodes, and then three. The reason we want two nodes in the third layer is so that we can plot these two features on a plot. I'm also going to change the activation function to a tanh, compared with the previous model. Feel free to experiment and see what works for you.
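The architecture described above can be sketched in Keras as follows. This is a sketch, not the exact exercise code: the 13 input features (consistent with a dataset of 178 samples such as the wine dataset) are an assumption not stated in the transcript.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# 8 -> 5 -> 2 -> 3 architecture with tanh activations, as described
model = Sequential([
    Dense(8, input_shape=(13,), activation='tanh'),  # assumed 13 input features
    Dense(5, activation='tanh'),
    Dense(2, activation='tanh'),     # only 2 nodes here, so the features can be plotted in 2D
    Dense(3, activation='softmax'),  # 3 output classes
])
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The two-node bottleneck in the third layer is the whole point of the exercise: it forces the network to compress the data into two numbers we can plot.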

I'm going to train the model on all the data, with no validation split, for 20 epochs, and the accuracy on the training set reaches 100 percent. Now, I don't know if I'm overfitting or not, because I've not separated training and test sets, but it doesn't matter: in this case we are interested in seeing how the model is learning the features it's building in the layers. So the layer we are going to look at is dense layer number three, where we have only two nodes, and we're going to extract the output of those two nodes and plot them on a two-dimensional plot. So we take the input of layer zero and the output of layer two, which is the third layer because we count from zero, and we define a function between the input and the output.
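The transcript defines a function between the input of layer zero and the output of layer two; an equivalent way to do this in Keras is to wrap those tensors in a new `Model`. The sketch below is self-contained, so it stands in synthetic data for the real dataset (hypothetical: 178 samples, 13 features, 3 classes).

```python
import numpy as np
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

# Synthetic stand-in for the real data (assumed shape: 178 samples, 13 features, 3 classes)
X = np.random.randn(178, 13)
y = np.random.randint(0, 3, size=178)

model = Sequential([
    Dense(8, input_shape=(13,), activation='tanh'),
    Dense(5, activation='tanh'),
    Dense(2, activation='tanh'),
    Dense(3, activation='softmax'),
])
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

# Train on all the data, no validation split, 20 epochs
model.fit(X, to_categorical(y), epochs=20, verbose=0)

# Map from the input of layer 0 to the output of layer 2
# (the third layer, because we count from zero)
feature_model = Model(inputs=model.layers[0].input,
                      outputs=model.layers[2].output)
features = feature_model.predict(X, verbose=0)
print(features.shape)  # (178, 2)
```

With real data, `X` and `y` would come from the dataset used in the exercise; the extraction step is the same.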

Now with this function, we can take the input, apply the function, store the result in some values, and then plot those values. So `features` is going to be a two-dimensional array with 178 points and two columns, i.e. 178 rows and two columns, and we can plot the two columns against each other with different colors based on the category. As you can see, in this two-feature representation at the third layer, the network has learned to separate the three classes very well. This is the amazing power of neural networks: the deeper you go in the layers, the better the representations they find for your data, in order to achieve the goal you've given them, in order to minimize the cost.
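The plotting step can be sketched like this; random numbers stand in for the extracted `features` array and class labels `y`, which in the exercise come from the trained network and the dataset.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so this also runs headless
import matplotlib.pyplot as plt

# Stand-ins for the real values: 178 rows, 2 columns, 3 classes
features = np.random.randn(178, 2)
y = np.random.randint(0, 3, size=178)

# One color per class: each point is one sample in the learned 2D feature space
plt.scatter(features[:, 0], features[:, 1], c=y, cmap='viridis')
plt.xlabel('Layer 3, feature 1')
plt.ylabel('Layer 3, feature 2')
plt.savefig('features.png')
```

With the real features, well-separated colored clusters in this plot are exactly the visual evidence the transcript describes.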

So neural networks are really feature learners: they learn the best features to solve a problem, and we will see that this is an amazing property when we are dealing with images or with more complex data. So don't worry if we're insisting on simple datasets and small data sizes; we have the rest of the course to focus on images and more complex data, but it's important that you validate this concept first with small dataset sizes, where things are not as complicated. So thank you for watching, and see you in the next video.

About the Author

Students: 1,112
Courses: 8
Learning paths: 3

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program of 2011.