Data Augmentation
Difficulty: Beginner
Duration: 1h 2m
Students: 1064
Rating: 4.7/5
Description

Move on from studying the principles of recurrent neural networks and how they can solve problems involving sequences with this cohesive course on Improving Performance. Learn to improve the performance of your neural networks, starting with learning curves that allow you to ask the right questions: do you need more data, or do you need a better model?

Further into the course, you will explore the fundamentals of batch normalization, dropout, and regularization.

The course also covers data augmentation, which allows you to build new data from your existing training data, and culminates in hyperparameter optimization, a tool that helps you decide how to tune the external parameters of your network.

The course is made up of 13 lectures and three accompanying exercises, and was produced by Cloud Academy in collaboration with Catalit.

Learning Objectives

  • Learn how to improve the performance of your neural networks.
  • Learn the skills necessary to make informed decisions when working with neural networks.

Intended Audience

Transcript

Hello and welcome to this video on data augmentation. In this video, we will present a technique called data augmentation, which is useful for generating more data when we cannot collect more. To train a neural network, we usually need a lot of data. However, it can be very expensive to generate training data with labels. In this video, we explore a way to obtain more labeled data for free. Let's assume we have an initial training dataset of images. Our goal is to identify whether an image contains a cat or not. If we shift the image by one pixel, the cat is still there, so our algorithm should still be able to find it. With this simple move, we have just multiplied the amount of data by four: one pixel to the right, one pixel to the left, one up, and one down. If we allow moves of more than one pixel, and diagonal moves, we can quickly obtain 10 or more times the original dataset. This is a good start, but there are other, more interesting transformations that we can apply. For example, we can flip the image left to right.
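The pixel-shift idea is simple enough to sketch directly in code. The following is a minimal, illustrative NumPy example and is not taken from the course; the function names are hypothetical, and images are assumed to be arrays of shape (height, width, channels):

```python
import numpy as np

def shift_image(image, dx, dy):
    # Translate the image by (dx, dy) pixels, zero-padding the exposed border.
    # np.roll wraps pixels around to the opposite edge, so we zero out the
    # wrapped rows/columns to get a true translation.
    shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
    if dy > 0:
        shifted[:dy, :] = 0
    elif dy < 0:
        shifted[dy:, :] = 0
    if dx > 0:
        shifted[:, :dx] = 0
    elif dx < 0:
        shifted[:, dx:] = 0
    return shifted

def one_pixel_shifts(image):
    # One labeled image becomes five: the original plus four one-pixel
    # shifts, all sharing the original label.
    return [image,
            shift_image(image, 1, 0),    # one pixel right
            shift_image(image, -1, 0),   # one pixel left
            shift_image(image, 0, -1),   # one pixel up
            shift_image(image, 0, 1)]    # one pixel down

def flip_left_right(image):
    # The left-to-right flip mentioned above: reverse the width axis.
    return image[:, ::-1]
```

Allowing offsets of a few pixels, plus diagonal combinations, gives the tenfold or greater expansion of the dataset described above.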

As long as we're not trying to read text in the image, this should provide perfectly valid training data. We can also rotate the image by a tiny amount, or change the aspect ratio slightly. We can zoom out or zoom in. Notice how all these transformations force the network to see a cat regardless of its size or orientation. We can shear the image; we can turn it into a black-and-white image, and it's still a cat; and we can modify the colors a little while retaining the shape of the cat. Finally, we can add some random noise, or even small occlusions and deletions. As long as the cat is still visible, the network should still be able to recognize it. Basically, we can apply any transformation that would still produce an image of a cat: if a human still sees a cat, a machine should be able to see a cat too. In conclusion, in this video we've introduced how to use image transformations to obtain more labeled data at no cost. Thank you for watching, and see you in the next video.
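In practice, deep learning frameworks bundle these transformations so they can be applied randomly, on the fly, during training. As an illustration only (this exact code is not shown in the video), here is a minimal sketch using Keras's ImageDataGenerator; x_train, y_train, and model are hypothetical placeholders assumed to already exist:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Each training batch is randomly transformed with the operations
# described above before being fed to the network.
datagen = ImageDataGenerator(
    rotation_range=10,         # rotate by a tiny amount (degrees)
    width_shift_range=0.1,     # horizontal shifts (fraction of width)
    height_shift_range=0.1,    # vertical shifts (fraction of height)
    shear_range=0.1,           # shear the image
    zoom_range=0.1,            # zoom in or out
    horizontal_flip=True,      # flip left to right
    channel_shift_range=20.0,  # modify the colors a little
    fill_mode='nearest',       # fill pixels exposed by a transform
)

# Hypothetical usage, assuming x_train, y_train, and model exist:
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)
```

Because the transformations are sampled at random each epoch, the network rarely sees exactly the same image twice, which is the effect described in the video.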

About the Author
Students: 9706
Courses: 8
Learning Paths: 8

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program of 2011.