
Dropout and Regularization

Developed with Catalit
Overview

Difficulty: Beginner
Duration: 1h 2m
Students: 64
Rating: 5/5

Description

Move on from what you learned from studying the principles of recurrent neural networks, and how they can solve problems involving sequencing, with this cohesive course on improving performance. Learn to improve the performance of your neural networks, starting with learning curves, which allow you to ask the right questions: do you need more data, or do you need a better model?

Further into the course, you will explore the fundamentals of batch normalization, dropout, and regularization.

This course also touches on data augmentation, which lets you build new data from your existing training data, and culminates with hyperparameter optimization, a tool that helps you decide how to tune the external parameters of your network.

This course is made up of 13 lectures and three accompanying exercises. This Cloud Academy course is in collaboration with Catalit.

Learning Objectives

  • Learn how to improve the performance of your neural networks.
  • Learn the skills necessary to make executive decisions when working with neural networks.

Intended Audience

Transcript

Hey guys, welcome back. In this video I'm going to show you how to add dropout and weight regularization to a layer. Let's start with dropout. Dropout is implemented in Keras as a separate layer, so we have to import it from keras.layers, and then we can add it like any other layer to our sequential model. In this case, for example, I built a little model with a dropout layer at the very beginning, which means we drop 20% of the input. You can see it has an input shape equal to the shape of the input data. Then I've added a second dropout layer right after the first fully connected layer. So there's a fully connected layer with 512 nodes, and then a dropout with a 40% drop rate. Notice that the rate in the dropout layer is defined as the fraction of the input units to drop, so this is very important.
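The model described here can be sketched roughly as follows. This is a minimal reconstruction, not the exact notebook from the video: the 20-feature input shape, the ReLU activations, and the final sigmoid output are assumptions for illustration.

```python
# Sketch of a sequential model with dropout, assuming a 20-feature input.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    Input(shape=(20,)),
    Dropout(0.2),                    # drop 20% of the input units
    Dense(512, activation="relu"),   # fully connected layer, 512 nodes
    Dropout(0.4),                    # drop 40% of that layer's outputs
    Dense(1, activation="sigmoid"),  # assumed output head for illustration
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

Note that dropout is only active during training; at inference time Keras automatically disables it and uses all units.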

So what this means is that here we're dropping 40% of the units, whereas here we're dropping 20% of the input units. This is how you add dropout to a network, and it may help, especially in large networks. In small networks it may or may not help, but in large networks it has been shown to actually help training and convergence. So give it a try and see how it works for you. The other thing I wanted to show you is how to introduce regularization in a layer, and it's literally just another parameter in the Dense layer API. As you can see, you can regularize the weights, this is the kernel regularizer. You can regularize the biases. You can even regularize the activity. The available options are L1 and L2, so you can choose between the L2 norm and the L1 norm. Have a look at the regularizers documentation to know more. I just wanted to show you that, if you want, you can introduce it in your model. In the exercise, you'll get to play with both dropout and regularization. For now, thank you for watching, and see you in the next video.
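The regularization parameters mentioned here can be sketched like this. The penalty strengths (0.01, 0.001) and the layer sizes are arbitrary values chosen for illustration, not the ones from the video.

```python
# Sketch of a Dense layer with weight, bias, and activity regularization.
from tensorflow.keras import Input, regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Input(shape=(20,)),  # assumed input shape for illustration
    Dense(
        512,
        activation="relu",
        kernel_regularizer=regularizers.l2(0.01),      # penalize the weights
        bias_regularizer=regularizers.l1(0.01),        # penalize the biases
        activity_regularizer=regularizers.l2(0.001),   # penalize the outputs
    ),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Each regularizer adds its penalty to the loss during training, so larger coefficients push the corresponding quantities more strongly toward zero.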

About the Author

Students: 1102
Courses: 8
Learning paths: 3

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI, and graduated from the Singularity University summer program of 2011.