Exercise 4: Solution
Difficulty: Beginner
Duration: 1h 13m
Students: 1560
Ratings: 4.5/5
Description

Continue the journey into data and machine learning with this course from Cloud Academy.

In previous courses, the core principles and foundations of data and machine learning were covered and best practices explained.

This course gives an informative introduction to deep learning and introduces neural networks.

This course is made up of 12 expertly instructed lectures, along with 4 exercises and their respective solutions.

Please note: the Pima Indians Diabetes dataset can be found at this GitHub repository or at the Kaggle page mentioned throughout the course.

Learning Objectives

  • Understand the core principles of deep learning
  • Be able to work with all the components of a neural network framework

Intended Audience

Transcript

Hey, guys, welcome back. This is the walkthrough of exercise four. Exercise four is not really a problem; it's more about playing around with a visual tool and forming an intuition for how the different parameters in a neural network work. The tool is called TensorFlow Playground, and it's freely available from Google. What it does is let you play with a neural net visually: you can set the number of layers and the number of neurons per layer, and play around with different datasets. So, let's start from this dataset and the pre-configured number of neurons and layers. The way you run it is you hit Play. What we see is that our model is learning the boundary between our classes, and here you see displayed the loss on the training and on the test set. Now, what's interesting is that the tool also visualizes the decision boundary of each individual neuron. So we can see what this neuron, for example, is doing: we only have two input features, X and Y, and this neuron is giving a weight of -0.43 to the first feature and -0.36 to the second feature, which produces this boundary. In the second layer, since the network is fully connected, this neuron takes input from all four neurons below it, and its decision boundary is this one here, which is minus 1.3 times this one, plus 1.1 times this one, and so on and so forth.
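If you want to reproduce this setup outside the browser, here is a minimal sketch in Keras. Note the assumptions: the two-circles dataset, the 4-and-2-neuron hidden layers, and the learning rate are guesses meant to mirror the playground demo, not an exact export of its configuration.

```python
# Rough Keras equivalent of the playground network described above:
# 2 input features, tanh hidden layers, sigmoid output for two classes.
# Layer sizes and dataset are illustrative, not the exact playground settings.
from sklearn.datasets import make_circles
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

X, y = make_circles(n_samples=1000, noise=0.1, factor=0.3, random_state=0)

model = Sequential([
    Dense(4, input_shape=(2,), activation='tanh'),
    Dense(2, activation='tanh'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer=SGD(learning_rate=0.03),
              loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=50, validation_split=0.25, verbose=0)

# Each column of the first layer's kernel holds one neuron's two input
# weights -- the numbers the playground displays on the connecting edges.
weights, biases = model.layers[0].get_weights()
print(weights)  # shape (2, 4): one column per hidden neuron
```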

The decision function, sorry, the activation function we are using in the inner layers is the Tanh. So let's see what happens if we change the activation function to another one and retrain the model. It actually converges much faster, but notice how the boundaries learned by our neurons now contain these straight-line segments. That's because we changed the activation function, so each neuron's boundary is a much sharper kind of boundary, like these. Let's play around with it a little more. For example, let's make the learning rate very small and see what happens. We reset it and run it, and we see that it's gradually converging, but it's taking a really, really long time. That's because the learning rate, and we'll see more formally what that means later, is very small. Let's see what happens if I make it huge. If I make it huge, our model is not learning anymore: the loss has exploded to a certain value and the model isn't doing anything. So we must find a nice balance between crazy high and crazy low. This seems to be okay. Right, let's check another thing. What if I use fewer layers and fewer neurons, like this? Will my model still be able to converge? Let's see. Okay, maybe not.
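Here is a small script that mirrors this learning-rate demo, again under the same assumptions as before (Keras, the two-circles toy dataset, illustrative values): it trains the same tiny network with a tiny, a moderate, and a huge learning rate and compares the final training loss.

```python
# Learning-rate experiment: too small barely moves the loss, too large
# makes it blow up or oscillate. Values are illustrative.
from sklearn.datasets import make_circles
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

X, y = make_circles(n_samples=1000, noise=0.1, factor=0.3, random_state=0)

for lr in [0.0001, 0.03, 10.0]:
    model = Sequential([
        Dense(4, input_shape=(2,), activation='relu'),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer=SGD(learning_rate=lr), loss='binary_crossentropy')
    history = model.fit(X, y, epochs=50, verbose=0)
    print(f"lr={lr}: final training loss = {history.history['loss'][-1]:.4f}")
```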

Maybe it doesn't have enough complexity to actually separate the two classes. Okay, so I'll try again. Reset, restart, but it doesn't seem to be able to make it. Maybe yes, this time it made it. Okay, it's not perfect, but somehow it learned to combine the three neurons into this boundary, and it's actually doing quite a bit better. And if I add a neuron and restart, now it's working pretty well. Okay, so we've explored this. There are other things you can explore, like regularization, and we'll talk about that later. You can also play around with other datasets. This one, for example, is really challenging: if I run it, you see there's no way a neural net this simple can actually solve it. The idea is that you should get a sense for what each of these things is doing, and even if you don't understand everything yet, just play around with it. See if you can make it converge on a harder dataset. Look at the effect of each of the components: the neurons, the layers, and so on. Then, in the rest of the course, we'll have plenty of time to actually understand what each of these things does. So, good luck, enjoy, and see you in the next video.
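To see the same capacity effect in code, you can vary the size of a single hidden layer and check whether the network can still separate the classes. As before, this is a sketch with assumed, illustrative settings rather than the exact playground configuration.

```python
# Capacity experiment: with too few hidden neurons the network cannot
# carve out the inner circle, so accuracy stalls near chance level.
from sklearn.datasets import make_circles
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_circles(n_samples=1000, noise=0.1, factor=0.3, random_state=0)

for n_neurons in [1, 2, 3, 4, 8]:
    model = Sequential([
        Dense(n_neurons, input_shape=(2,), activation='tanh'),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    model.fit(X, y, epochs=100, verbose=0)
    loss, acc = model.evaluate(X, y, verbose=0)
    print(f"{n_neurons} hidden neurons: accuracy = {acc:.2f}")
```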

About the Author
Students: 8893
Courses: 8
Learning Paths: 8

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and the Université de Paris VI, and I graduated from the Singularity University summer program in 2011.