Course: Getting Started With Deep Learning: Convolutional Neural Networks

# Convolutional Layers Continued


In this course, you will discover convolutions and the convolutional neural networks involved in Data and Machine Learning, starting with the concept of the tensor, which is essential for everything that follows.

You will learn to apply them to the right kind of data, such as images. Images store their information in pixels, but you will discover that it is not the value of each individual pixel that matters.

**Learning Objectives**

- Understand how convolutional neural networks are essential to the fundamentals of Data and Machine Learning.

**Intended Audience**

- It is recommended to complete the Introduction to Data and Machine Learning course before starting.

Hey guys! Welcome back. In this video we're going to use a convolutional layer from Keras to perform a filtering operation on an image, and get familiar with how it works. First of all, we're going to load the Conv2D layer. Remember, the image we've loaded in the previous video has a shape of 512 by 512. Just so that you remember, this is the image we are dealing with: plt.figure with figsize five by five, and then plt.imshow of img with the colormap set to gray. And this is the image. Okay, so the first thing we've got to do is reshape our image so that it's a four-dimensional tensor. So, we're gonna have one sample, 512 pixels of height, 512 pixels of width, and one color channel.
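The reshaping step can be sketched like this; a random array stands in for the image loaded in the previous video:

```python
import numpy as np

# Stand-in for the 512x512 grayscale image loaded in the previous video.
img = np.random.rand(512, 512)

# Reshape to a rank-4 tensor: (samples, height, width, channels).
img_tensor = img.reshape(1, 512, 512, 1)
print(img_tensor.shape)  # (1, 512, 512, 1)
```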

So, we reshape the image; now image_tensor has the new shape, and then we build a super simple neural network model with just one layer: one convolutional layer with just one node, a filter of three by three, and strides of two and one. The input_shape it expects is 512 by 512 by one color channel, because it's a grayscale image, and we compile it with a certain optimizer, but we actually will not train it, so this line doesn't really matter. Now the model is compiled; it's not been trained, but it's still able to perform the feed-forward pass, which essentially will take our image_tensor and perform a single convolution with the three-by-three kernel with strides of two and one.
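A minimal sketch of this one-layer model, assuming the TensorFlow implementation of Keras; the optimizer choice is arbitrary since the model is never trained:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

# One convolutional layer: a single 3x3 filter, strides (2, 1).
model = Sequential([
    Conv2D(filters=1, kernel_size=(3, 3), strides=(2, 1),
           input_shape=(512, 512, 1)),
])
model.compile(optimizer='adam', loss='mse')  # compiled, but never trained

# A feed-forward pass performs the convolution with random initial weights.
img_tensor = np.random.rand(1, 512, 512, 1).astype('float32')
out = model.predict(img_tensor)
print(out.shape)  # (1, 255, 510, 1)
```

With 'valid' (no) padding, each output size is `(input - kernel) // stride + 1`: the stride-2 axis goes from 512 to 255, the stride-1 axis to 510.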

So, let's see what happens if we do that. It performed a convolution, and the output shape is exactly what we expect. Along the axis where we had a stride of one, we have convolved our filter and lost just two pixels at the border, so we went from 512 to 510, because the three-by-three kernel loses one pixel on the left side and one pixel on the right side. On the other hand, along the vertical axis where we had a stride of two, we have roughly halved the size and also lost one pixel, going from 512 to 255. So, this is our new shape. We extract the interior part of the tensor to get a two-dimensional image, we plot it, and this is what we have. So, it's an image that has lost half of its length along one side, but it's still the whole image; it's been filtered with random weights. What are the weights? This is our kernel: remember, the first element in the weights object is the kernel filter. It's three by three by one by one, so one channel in input, one channel in output, and three-by-three height and width of the filter. You can display it by taking element zero, zero along the last two axes, and this is what our filter looks like. Now let's create a new filter, a filter that is all ones. So, this filter is gonna be a flat window: basically it's going to take the nine pixels that are nearby and average them.
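To see why an all-ones kernel behaves like an averaging (box-blur) filter, here is a NumPy-only sketch of the same 'valid' convolution with strides of two and one; the function name `conv2d_valid` and the tiny 6x6 input are illustrative, not from the video:

```python
import numpy as np

def conv2d_valid(img, kernel, strides=(2, 1)):
    """'Valid' 2D cross-correlation, as a Conv2D layer computes it."""
    kh, kw = kernel.shape
    sh, sw = strides
    oh = (img.shape[0] - kh) // sh + 1  # stride 2: roughly halved
    ow = (img.shape[1] - kw) // sw + 1  # stride 1: lose one pixel per side
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i * sh:i * sh + kh, j * sw:j * sw + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
box = np.ones((3, 3)) / 9.0  # all ones, normalized so each output is a mean
out = conv2d_valid(img, box)
print(out.shape)   # (2, 4)
print(out[0, 0])   # 7.0, the mean of the top-left 3x3 patch
```

The video's filter is all ones without the 1/9 normalization, so it sums the nine neighboring pixels; up to that constant factor, the effect is the same mild blur.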

If we convolve this new filter with the image, it's going to average the nine pixels under the window and slide it along. So, we set the weights of our model to be our new weights, we recalculate our image_tensor by running a feed-forward pass with model.predict, and finally we extract the image again and display it. So, again, our model still has a stride of two along one axis, so the image has a shorter side and a longer side, but you can see the result is basically a slight blurring of our image. If we run this again with the keyword padding equal to 'same', our image will be padded with zeros and we will not lose the pixels on the borders, so our final resulting tensor is going to have the exact same shape as the image. Notice also that the stride here is the default; the default stride is one, one, so it's one in both directions. So, here's how a convolutional layer works. As you can see, it's not magic: it's just computing the convolution between our weights, our filter, and whatever input we pass. I hope this is starting to clarify how convolutional neural networks work, so I'll see you in the next video with more exciting convolutional neural nets.
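A sketch of the padded variant, again assuming the TensorFlow implementation of Keras, with the kernel set to all ones as in the video (zeroing the bias for clarity):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

# padding='same' pads with zeros; the default strides are (1, 1).
model = Sequential([
    Conv2D(filters=1, kernel_size=(3, 3), padding='same',
           input_shape=(512, 512, 1)),
])
model.compile(optimizer='adam', loss='mse')

# Replace the random kernel with an all-ones (box) filter.
kernel, bias = model.get_weights()   # kernel shape: (3, 3, 1, 1)
model.set_weights([np.ones_like(kernel), np.zeros_like(bias)])

out = model.predict(np.random.rand(1, 512, 512, 1).astype('float32'))
print(out.shape)  # (1, 512, 512, 1) -- same height and width as the input
```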

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI, and graduated from the Singularity University summer program of 2011.