CloudAcademy
Getting Started With Deep Learning: Convolutional Neural Networks

Exercise 2: Solution

Developed with
Catalit
Overview
Difficulty: Beginner
Duration: 1h 19m
Students: 43

Description

In this course, discover convolutions and the convolutional neural networks used in Data and Machine Learning. We introduce the concept of the tensor, which is essential for everything that follows.

Learn to apply these networks to the right kind of data, such as images. Images store their information in pixels, but you will discover that it is not the value of each individual pixel that matters.

 

Learning Objectives

  • Understand how convolutional neural networks are essential to the fundamentals of Data and Machine Learning.

Intended Audience

Transcript

Hey guys, welcome back. In this video we're going to review exercise number two, where we're tasked with building a convolutional neural network model capable of distinguishing between 10 different categories of images. These are color images, and as you'll see, this model will not run very well on a laptop, so we'll need to run it on a GPU in the next chapter. So we have to load CIFAR-10, display a few images to check what they look like, check the shape and decide whether we need to reshape or rescale the data, and whether y_train needs to be reshaped. The architecture of our model has to be like this: two convolutional layers, one max pooling layer, two more convolutional layers, one max pooling layer, then a flatten layer, a dense layer, and the output layer. Finally, we compile the model, check the number of parameters, and attempt to train it.
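The loading-and-inspection step can be sketched as follows. To keep this sketch self-contained, a random placeholder array stands in for the real data — in the exercise you would get these arrays from `keras.datasets.cifar10.load_data()` — so the shapes and dtype match CIFAR-10 but the pixel values are made up.

```python
import numpy as np

# Placeholder batch with the same shape and dtype as the CIFAR-10 training set;
# in the actual exercise this comes from keras.datasets.cifar10.load_data().
X_train = np.random.randint(0, 256, size=(50000, 32, 32, 3), dtype=np.uint8)
y_train = np.random.randint(0, 10, size=(50000, 1))

print(X_train.shape)   # (50000, 32, 32, 3): 50,000 RGB images, 32x32 pixels each
print(X_train.dtype)   # uint8: raw pixel values in 0..255, so we will rescale
print(y_train.shape)   # (50000, 1): one integer class label (0-9) per image
```

Checking the shape like this is what tells us the answers to the exercise questions: no reshaping is needed, but the 0–255 pixel values do need rescaling, and the integer labels need one-hot encoding.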

So, it should be pretty straightforward. We load CIFAR-10 and the images, and we have 50,000 images, 32 by 32 pixels, with three color channels. This is what they look like: you can probably see this one is a truck. They don't look very nice at all because they're just 32 by 32 pixels, but it's usually possible to distinguish the object. So we definitely need to rescale them to be between zero and one, and we also need to convert y_train to categorical: we have 10 categories of images, so we convert y_train and y_test to categorical. Okay, let's go with our model. The first convolutional layer has input 32 by 32 by 3, activation relu, and padding 'same'; then another convolutional layer, also with relu activation. We have 32 filters in the first layer and 32 in the second, with a three-by-three kernel for each, followed by a MaxPool with pool_size two by two. Then we have the second stack of filters: a convolutional layer with 64 filters and padding 'same', another convolutional layer with 64 filters, again with a three-by-three kernel, and the MaxPool. Finally we have our Flatten layer, our Dense layer with 512 units, and our final output layer.
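A minimal Keras sketch of the preprocessing and the architecture just described. The exact layer options are assumptions reconstructed from the description — in particular, padding 'same' on the first convolution of each stack and no padding on the second, which is one configuration that matches the parameter count quoted in the transcript — and a small random array stands in for the CIFAR-10 data.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.utils import to_categorical

# Small random stand-in for the CIFAR-10 arrays (real shape: 50000 x 32 x 32 x 3).
X_train = np.random.randint(0, 256, size=(100, 32, 32, 3)).astype('float32')
y_train = np.random.randint(0, 10, size=(100, 1))

# Rescale pixels to [0, 1] and one-hot encode the 10 class labels.
X_train = X_train / 255.0
y_train_cat = to_categorical(y_train, 10)

model = Sequential([
    Input(shape=(32, 32, 3)),
    Conv2D(32, (3, 3), padding='same', activation='relu'),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), padding='same', activation='relu'),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(10, activation='softmax'),
])
model.summary()
```

With these choices, `model.count_params()` reports 1,250,858 trainable parameters, in line with the figure given in the video.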

So we compile the model and check the number of parameters: it's a model with about 1,250,000 parameters. As you can see, most of them are in the final Dense layer, at least 1,100,000, and then we have a few more parameters in the convolutional layers. Now, if we try to fit this model, you'll see that it goes really slowly, and we don't need to continue, because in the next chapter we're going to learn how to run the same model on a computer equipped with a GPU in the cloud, which will make it run much faster. So this exercise ends here. Thank you for watching, and see you in the next video.
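That parameter count can be checked by hand. Each convolution contributes (kernel height × kernel width × input channels + 1) × filters weights, and each dense layer (inputs + 1) × units; the padding assumptions here ('same' only on the first convolution of each stack) are one configuration that reproduces the quoted figure.

```python
# Parameters per layer: conv = (kh * kw * in_channels + 1) * filters,
# dense = (inputs + 1) * units; the +1 is the bias per output unit.
conv1 = (3 * 3 * 3 + 1) * 32    # 32x32x3 -> 32x32x32 (padding 'same'): 896
conv2 = (3 * 3 * 32 + 1) * 32   # 32x32x32 -> 30x30x32 (no padding):   9248
# MaxPool 2x2: 30x30x32 -> 15x15x32 (no parameters)
conv3 = (3 * 3 * 32 + 1) * 64   # 15x15x32 -> 15x15x64 (padding 'same'): 18496
conv4 = (3 * 3 * 64 + 1) * 64   # 15x15x64 -> 13x13x64 (no padding):    36928
# MaxPool 2x2: 13x13x64 -> 6x6x64, flattened to 6*6*64 = 2304 values
dense = (6 * 6 * 64 + 1) * 512  # the bulk of the parameters
output = (512 + 1) * 10         # 10-way softmax layer

total = conv1 + conv2 + conv3 + conv4 + dense + output
print(dense)   # 1180160: over 1.1 million, just as the transcript says
print(total)   # 1250858: about 1.25 million parameters in all
```

The arithmetic makes the transcript's point concrete: the single Dense(512) layer holds roughly 94% of the model's weights, because it connects every one of the 2,304 flattened values to all 512 units.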

About the Author

Students: 745
Courses: 8
Learning paths: 3

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program of 2011.