Continuous Learning
Difficulty: Beginner
Duration: 1h 2m
Students: 1065
Ratings: 4.7/5
Description

Building on what you learned about the principles of recurrent neural networks and how they can solve problems involving sequences, this cohesive course on Improving Performance teaches you how to improve the performance of your neural networks. It begins with learning curves, which allow you to ask the right questions, such as whether you need more data or, instead, a better model.

Further into the course, you will explore the fundamentals of batch normalization, dropout, and regularization.

The course also covers data augmentation, which lets you build new data from your original training data, and culminates in hyper-parameter optimization, a technique that helps you decide how to tune the external parameters of your network.

This course is made up of 13 lectures and three accompanying exercises. This Cloud Academy course is in collaboration with Catalit.

Learning Objectives

  • Learn how to improve the performance of your neural networks.
  • Learn the skills necessary to make executive decisions when working with neural networks.

Intended Audience

Transcript

Hello, welcome to this video on continuous learning. In this video, we will introduce the concept of continuous training, or continuous learning. Data augmentation is the first step towards continuous learning. Since we can apply many transformations, we don't actually need to generate up front the huge dataset obtained by applying all of them. We can start from a seed dataset of labeled images and loop over this set many times, applying random small transformations before we feed each batch to the model. In this situation, we are continuously feeding small variations of our seed dataset to the model, generating the modified data on the fly. As we've seen, this can be done for images, but it can also be done for sound, for example by changing the pitch of a sound, changing the speed of reproduction, or adding artifacts like background noise, traffic, people clapping, and other sources of disturbance.
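For the image case, here is a minimal sketch of what feeding random small transformations on the fly might look like with Keras' ImageDataGenerator; the seed data and the transformation ranges below are illustrative assumptions, not the course's exact settings.

```python
# A minimal sketch of on-the-fly image augmentation (illustrative values).
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Seed dataset: images with labels (random data as a stand-in here).
x_seed = np.random.rand(100, 32, 32, 3)
y_seed = np.random.randint(0, 10, size=(100,))

# Each time a batch is drawn, a random small transformation is applied.
augmenter = ImageDataGenerator(
    rotation_range=15,       # random rotations up to 15 degrees
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True,    # random mirroring
)

# An endless stream of augmented batches built from the seed set.
batches = augmenter.flow(x_seed, y_seed, batch_size=32)
x_batch, y_batch = next(batches)  # a fresh, randomly transformed batch
```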

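A similar on-the-fly scheme can be sketched for sound with librosa; the file name, pitch shift, speed change, and noise level below are illustrative assumptions.

```python
# A minimal sketch of audio augmentation, assuming a mono recording;
# the file name and transformation amounts are hypothetical.
import numpy as np
import librosa

y, sr = librosa.load("speech_sample.wav", sr=None)  # hypothetical file

y_pitch = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)  # raise pitch by 2 semitones
y_speed = librosa.effects.time_stretch(y, rate=1.1)         # play it 10% faster
y_noisy = y + 0.005 * np.random.randn(len(y))               # add light background noise

# Each variant keeps the same label as the original recording, so one seed
# clip yields several new training examples.
```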
We can even add distortion, echo, and other disturbances; basically, if a human can still distinguish the primary sound, for example the voice, then the machine learning algorithm should be able to as well. These techniques are used to train all modern speech recognition engines, for example. Text is harder to augment, but successful attempts include the use of synonyms generated from a thesaurus. Since we are generating data on the fly, the concept of an epoch is not well defined, because we never exhaust the training data. So the way to proceed is to define the size of a batch and then let the model train indefinitely, until we reach a good performance or until performance no longer improves. This is why it's called continuous training. In this video, we've learned how to generate data on the fly with transformations; we've applied this technique to images and sound, and suggested that it can possibly be done for text too. This is called continuous training because there are no epochs: you continue to train the model until you're satisfied. Thank you for watching, and see you in the next video.
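As a rough illustration of training without epochs, the sketch below keeps drawing freshly augmented batches (the `batches` generator from the image sketch above) and stops only when validation performance stops improving; the model, the validation data, and the patience value are all illustrative assumptions.

```python
# A rough sketch of continuous training: no fixed number of epochs, just keep
# drawing augmented batches until validation accuracy stops improving.
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical small classifier and held-out validation data for the sketch.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

x_val = np.random.rand(20, 32, 32, 3)
y_val = np.random.randint(0, 10, size=(20,))

best_val_acc = 0.0
patience, stale_rounds = 5, 0

while stale_rounds < patience:
    # One "round" is an arbitrary chunk of 100 augmented batches, not a true epoch.
    model.fit(batches, steps_per_epoch=100, epochs=1, verbose=0)
    _, val_acc = model.evaluate(x_val, y_val, verbose=0)
    if val_acc > best_val_acc:
        best_val_acc, stale_rounds = val_acc, 0
    else:
        stale_rounds += 1  # stop after `patience` rounds without improvement
```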

About the Author
Students: 9728
Courses: 8
Learning Paths: 8

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program of 2011.