
# Convolutional Neural Network Continued

**Difficulty:** Beginner

**Duration:** 1h 19m

**Students:** 116

### Description

In this course, you will discover convolutions and the convolutional neural networks used in data and machine learning, starting with the concept of a tensor, which is essential for everything that follows.

You will learn to apply these networks to the right kind of data, such as images. Images store their information in pixels, but you will discover that it is not the value of each individual pixel that matters.

**Learning Objectives**

- Understand the role of convolutional neural networks in the fundamentals of data and machine learning.

**Intended Audience**

- It is recommended to complete the Introduction to Data and Machine Learning course before starting.

### Transcript

Hey guys, welcome back. In this video we're going to build our first convolutional neural network. I hope you're very excited; I am excited to show you how to do it. The first thing we've got to do is reshape our data so that it looks like an order-4 tensor: 60,000 points, 60,000 images, 28x28 pixels each, and one color channel. We load a couple of additional layers, and then we go. I clear the Keras session just to occupy less memory; it's not strictly necessary, and a new model will be created either way. Then I create the sequential model and start adding layers. The first layers we're going to add are a convolutional layer with 32 filters, each filter three pixels by three pixels, where the input shape it expects is 28x28x1, the shape of the image, and then a MaxPooling layer, two-dimensional, with a pool size of 2x2. So, can you calculate what the output image will be? We started with 28x28 and we convolve with a 3x3 kernel; we don't pad, we use "valid" padding, so the output of this convolutional layer is going to be a tensor where the images have a height and width of 26x26: we lose a pixel on each side. And since we apply pooling with a 2x2 window after this, they're going to be 13x13. We apply a non-linear function, relu, and then we flatten the tensor into a long one-dimensional array.
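
The shape arithmetic above can be checked with a small sketch. These helper names are illustrative (not from the course code); the formulas are the standard ones for a stride-1 convolution and non-overlapping pooling:

```python
def conv_output_size(size, kernel=3, padding="valid"):
    """Spatial size after a stride-1 convolution.

    "valid" padding drops (kernel - 1) pixels in total
    (one on each side for a 3x3 kernel); "same" keeps the size.
    """
    if padding == "valid":
        return size - kernel + 1
    return size

def pool_output_size(size, pool=2):
    """Spatial size after non-overlapping max pooling."""
    return size // pool

# 28x28 input -> 3x3 "valid" convolution -> 26x26
after_conv = conv_output_size(28, kernel=3)
# 26x26 -> 2x2 max pooling -> 13x13
after_pool = pool_output_size(after_conv, pool=2)
print(after_conv, after_pool)  # 26 13
```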

That we plug into a dense layer, a fully-connected layer with a relu activation. And then the final layer, the output layer, has 10 nodes because we have 10 digits, with a softmax activation. We can compile and check what our model looks like. So we have our first convolutional layer; as expected, its output is the number of samples times 26x26 times 32 filters. Our MaxPooling layer cuts the image in half by doing 2x2 pooling, still with 32 channels. Then we have the activation function. We could have done the activation earlier; it doesn't change the result at all to swap the pooling and the activation, except if we do the activation after the pooling we do it on fewer pixels, so the calculation runs faster. So out of the activation function, the relu, we have 13 pixels by 13 pixels by 32 filters, or 32 output channels, call it whatever you prefer. Then we flatten it, so now we have the product of these three numbers, fully connected to the next layer with 128 nodes, and then finally fully connected to the output. Notice how the number of parameters is essentially driven by this fully-connected connection. The initial number of parameters is actually pretty small, and we have no parameters for the pooling and no parameters for the activation.
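
You can reproduce the parameter counts that `model.summary()` reports with a quick back-of-the-envelope calculation (variable names here are just for illustration). Each 3x3x1 filter has 9 weights plus a bias, while the pooling and activation layers contribute nothing:

```python
# Parameter counts for each layer that has weights.
kernel_h, kernel_w, in_channels, filters = 3, 3, 1, 32
conv_params = (kernel_h * kernel_w * in_channels + 1) * filters  # +1 bias per filter

flat_size = 13 * 13 * 32                 # the flattened 13x13x32 tensor
dense_params = (flat_size + 1) * 128     # fully-connected layer, +1 row of biases
output_params = (128 + 1) * 10           # output layer: 10 digits

print(conv_params)    # 320
print(dense_params)   # 692352
print(output_params)  # 1290
```

The dense layer sitting on top of the flattened tensor accounts for well over 99% of the parameters, which is exactly the point made in the transcript.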

So the bulk of the parameters comes from this connection here. We can fit the model with a batch size of 128, two epochs, and verbose=1. You will see that on a laptop this will run pretty slowly. The reason is that the calculations involved in the convolutional part are actually pretty intense, because we have to run multiple products of our kernel with all the image patches. This is where we hit the limit of what's doable on a laptop, and in the next chapter we're going to move to the cloud and run our calculations there. However, as you can see, in two epochs our model is already performing better than our fully-connected model, and if you keep training for more epochs it should get even better. So let's wait for it to finish, and then let's evaluate it on the test set and see how well it performs. Voilà, it has run on the test set, and we get 97.52% accuracy. If you are not satisfied with this result, you can train with more epochs and see how well it can predict on your test set. You can also change the batch size to try to make it converge a bit more. So congratulations, you've run your first convolutional neural network. It's an incredibly powerful technique for working with images. I hope you're excited, and I'll see you in the next video.
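
To see why the convolutional part is compute-heavy, here is a minimal pure-Python sketch (not the course code, which uses Keras) of one "valid" convolution: every output pixel requires a full elementwise product of the kernel against an image patch, and a real layer repeats this for 32 filters over 60,000 images:

```python
def conv2d_valid(image, kernel):
    """2D cross-correlation with "valid" padding, the operation Conv2D computes."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # multiply-accumulate the kernel against one image patch
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

# 5x5 image of ones, 3x3 kernel of ones: output is 3x3, every value 9.0
image = [[1.0] * 5 for _ in range(5)]
kernel = [[1.0] * 3 for _ in range(3)]
result = conv2d_valid(image, kernel)
print(len(result), len(result[0]))  # 3 3
```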

# About the Author

**Students:** 1736

**Courses:** 8

**Learning paths:** 3

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at University of Padua and Université de Paris VI and graduated from Singularity University summer program of 2011.