Getting Started With Deep Learning: Improving Performance

Continuous Training

Developed with
Catalit
Overview
Difficulty: Beginner
Duration: 1h 2m
Students: 42

Description

Building on what you learned about the principles of recurrent neural networks and how they solve problems involving sequences, this cohesive course covers improving performance. Learn to improve the performance of your neural networks by starting with learning curves, which allow you to ask the right questions: do you need more data, or do you need a better model?

Further into the course, you will explore the fundamentals of batch normalization, dropout, and regularization.

This course also touches on data augmentation, which lets you build new data from your existing training data, and culminates in hyper-parameter optimization, a tool that helps you decide how to tune the external parameters of your network.

This course is made up of 13 lectures and three accompanying exercises. This Cloud Academy course is in collaboration with Catalit.

Learning Objectives

  • Learn how to improve the performance of your neural networks.
  • Learn the skills necessary to make executive decisions when working with neural networks.

Intended Audience

Transcript

Hey guys, welcome back. In this video we're going to talk about continuous training. Continuous training is a process in which we flow data continuously: we keep modifying it, and our training set becomes a generator that feeds data continuously to the model. In Keras we can do that with images thanks to the ImageDataGenerator class. This class is available in keras.preprocessing.image, and it does exactly what the name says: it's an image data generator. Remember all those transformations we talked about, changing the scale, shifting the image, rotating the image and so on? These are all available in the ImageDataGenerator, so we can use just this class and have our images transformed on the fly. Let's have a look at the available parameters. We can rescale the data, and here we are rescaling by one over 255 so that our pixels will be between zero and one. Then we can shift the width and the height, which moves the image up, down, left, and right.

We can rotate by a certain angle (in degrees), and we can also shear the image by a small amount. Finally we have a zoom range, and we allow for horizontal flipping, a random left-right flip of the image. There are other parameters available too, which I encourage you to look at. The documentation, as you know, is our friend, so have a look at these parameters and see what you can do with the ImageDataGenerator. The generator is only half of the work. Once we've initialized our generator, it exposes a few interesting methods, so let's have a look at them. We have the method flow, which takes a dataset of X and y that must be in memory and flows from it, modifying the data as it goes. The more interesting command is flow_from_directory, because in this case we can just pass a directory with a bunch of images, and the generator will take care of loading the images into memory, modifying them on the fly, and passing them to our model. So we don't need to create the dataset and store it or load it.
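To make this concrete, here is a minimal sketch of the generator described above, using Keras's ImageDataGenerator. The specific parameter values are illustrative choices, not the ones from the lesson's notebook:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# A sketch of the generator described in the video; the parameter
# values are illustrative choices, not the only reasonable ones.
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,       # pixels scaled to [0, 1]
    width_shift_range=0.1,   # shift left/right by up to 10%
    height_shift_range=0.1,  # shift up/down by up to 10%
    rotation_range=20,       # rotate by up to 20 degrees
    shear_range=0.1,         # shear by a small amount
    zoom_range=0.2,          # zoom in/out by up to 20%
    horizontal_flip=True,    # random left-right flips
)

# flow() is the in-memory variant: it takes a dataset (X, y)
# and yields randomly augmented batches one at a time.
X = np.random.rand(8, 64, 64, 3)  # 8 dummy RGB images
y = np.arange(8)
batches = train_gen.flow(X, y, batch_size=4)
X_batch, y_batch = next(batches)
print(X_batch.shape)  # (4, 64, 64, 3)
```

The flow call at the end shows the case where the whole dataset already sits in memory; the next video step covers flowing straight from a directory instead.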

We just need a set of images as a starting point, and then the generator will modify them and feed them to the model in real time, which is amazing. Okay, so how do we do that? We take the generator's flow_from_directory, we specify the directory, we give a target size for our images and a batch size, and then we also set the class mode. In this case, in the generator folder there's only one class, a single sub-folder called class 0, and if we print what's inside it there's only one image, a squirrel. So we can now ask the training generator's flow_from_directory for the next image, and this will be a random variation of the squirrel image generated by applying these deformations. I've asked for 16 random deformations of the squirrel, and I'm plotting them in a grid of subplots.
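Here is a hedged sketch of that flow_from_directory workflow. The folder layout, file name, and sizes are hypothetical, and a randomly generated image stands in for the lesson's squirrel:

```python
import os
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical stand-in for the lesson's folder: one class sub-folder
# (class_0) containing a single image (random pixels replace the squirrel).
os.makedirs('generator/class_0', exist_ok=True)
img = np.random.randint(0, 256, (200, 200, 3), dtype='uint8')
Image.fromarray(img).save('generator/class_0/squirrel.jpg')

train_gen = ImageDataGenerator(rotation_range=20, zoom_range=0.2,
                               shear_range=0.1, horizontal_flip=True)

batches = train_gen.flow_from_directory(
    'generator',             # directory with one sub-folder per class
    target_size=(128, 128),  # images are resized on the fly
    batch_size=1,            # one image per batch (only one file exists)
    class_mode='binary',
)

# Each call to next() yields a fresh, randomly deformed version of
# the same image -- asking 16 times gives 16 deformations.
deformations = [next(batches)[0][0] for _ in range(16)]
print(len(deformations), deformations[0].shape)  # 16 (128, 128, 3)
```

Plotting these 16 arrays in a 4-by-4 grid of subplots reproduces the kind of montage shown in the video.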

So you see these images have been rotated, they've been compressed, the scale ratio has been changed, and sometimes they've been sheared, like in this case. Notice that beyond the border the pixels are simply extended in the natural direction: those lines you see are just the last pixel on the border of the image, extended perpendicular to the border. This one is flipped, and this one and these are flipped too. This one is zoomed out, this one is zoomed in. So it's a combination of many transformations at once, but it should be very clear that all these images represent a squirrel, and our neural network should be able to recognize that. I hope you can see the power of this type of technique, where you augment the data you have with plausible deformations; it has really been a breakthrough in both image recognition and voice recognition. In the exercise we will experiment with this on a real-world dataset. So thank you for watching and see you in the next video.

About the Author

Students: 745
Courses: 8
Learning paths: 3

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from Singularity University's summer program in 2011.