Exercise 4: Solution
Difficulty: Beginner
Duration: 1h 45m
Students: 1219
Rating: 4.7/5
Description

Learn about the importance of gradient descent and backpropagation, under the umbrella of Data and Machine Learning, from Cloud Academy.

From understanding how neural networks work internally to solving real problems with them, this course covers the essentials needed to succeed in machine learning.

Learning Objectives

  • Understand the importance of gradient descent and backpropagation
  • Be able to build your own neural network by the end of the course

Prerequisites

Transcript

Hey, guys. Welcome to the exercise four solution, in section five. In this exercise, we use a very powerful Keras construct: callbacks. Callbacks are functions that are called each time a validation cycle ends, so at the end of each epoch. This exercise asks you to test and implement three different callbacks: EarlyStopping, ModelCheckpoint, and TensorBoard. So, let's do that. First of all, we import the callbacks and initialize them. We start with the checkpointer: we create an instance of the ModelCheckpoint class, and we have to define a filepath.
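As a sketch, the imports and the checkpointer initialization could look like this in Keras 2; the weights file path is just a placeholder for the temporary file mentioned in the video, not the one used in the course notebook:

```python
from keras.callbacks import ModelCheckpoint

# Save the model weights whenever the monitored quantity
# (validation loss by default) improves on its best value so far.
# The filepath below is illustrative.
checkpointer = ModelCheckpoint(filepath='/tmp/weights.hdf5',
                               verbose=1,
                               save_best_only=True)
```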

So, I'll point it at a temporary file, and then I specify the save_best_only parameter which, as we can see in the documentation, means that when save_best_only=True, the latest best model will not be overwritten; and by best we mean best with respect to whatever quantity we're monitoring. The next callback I'm going to initialize is EarlyStopping. We tell the earlystopper to monitor the validation loss. Essentially, what EarlyStopping does is watch the validation loss and, as soon as it stops improving, halt the fitting procedure. This way, we can set the number of epochs in the fit to a high number and then let the earlystopper stop training if the loss doesn't improve. Finally, I'm going to set up the TensorBoard callback, which is a really cool callback that allows us to visualize a bunch of quantities from our model.
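The other two callbacks can be initialized along these lines; the patience value and the log directory are illustrative choices, not values taken from the course:

```python
from keras.callbacks import EarlyStopping, TensorBoard

# Halt training once the validation loss stops improving.
earlystopper = EarlyStopping(monitor='val_loss', patience=1, verbose=1)

# Write logs that TensorBoard can turn into loss curves, graphs, etc.
tensorboard = TensorBoard(log_dir='/tmp/tensorboard/')
```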

Train_test_split should hold no more mysteries for you by now, so I'll go through it very quickly. Then we train the model. I've taken the same functional model I just built; the only difference is that I now pass the validation_data, X_test and y_test. I train on X_train and y_train, and I pass the three callbacks in a list. So, let's see what happens. If I run it, it monitors the validation loss; the validation loss keeps decreasing, and training reaches the end. As you can see, we also receive messages from our callbacks. For example, the checkpointer tells us that the monitored quantity improved from one epoch to the next, and that it is therefore saving the weights of that model to a file.
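Putting it together, the training call might look like the sketch below. The data and the functional model here are stand-ins (the real notebook defines its own X, y, and model earlier in the exercise); the callbacks are the three initialized above:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Model
from keras.layers import Input, Dense

# Toy data standing in for the exercise's dataset.
X = np.random.random((1000, 4))
y = np.random.randint(2, size=(1000, 1))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# A small functional model standing in for the one built in the lesson.
inputs = Input(shape=(4,))
hidden = Dense(8, activation='relu')(inputs)
outputs = Dense(1, activation='sigmoid')(hidden)
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')

# Generous epoch budget: EarlyStopping cuts training short when the
# validation loss stops improving, and ModelCheckpoint keeps the best weights.
model.fit(X_train, y_train,
          epochs=50,
          validation_data=(X_test, y_test),
          callbacks=[checkpointer, earlystopper, tensorboard])
```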

So, the combination of these callbacks is really useful, because we can stop the training and restart it from where we left off by reloading the weights. In the next video, we're going to check what the TensorBoard callback did, so make sure to check it out. Until then, have fun, and thank you for watching. See you in the next video.
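A minimal sketch of that restart, assuming the placeholder checkpoint path used above:

```python
# Restore the best weights saved by ModelCheckpoint, then resume
# training or evaluate from that point.
model.load_weights('/tmp/weights.hdf5')
```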

About the Author
Students: 9697
Courses: 8
Learning Paths: 8

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program in 2011.