Getting Started With Deep Learning: Convolutional Neural Networks

# MNIST Classification

**Difficulty** Beginner

**Duration** 1h 19m

### Description

In this course, you will discover convolutions and the convolutional neural networks used in data and machine learning, starting with the concept of a tensor, which is essential for everything that follows.

You will also learn how to work with the right kind of data, such as images. Images store their information in pixels, but you will discover that it is not the value of each individual pixel that matters so much as the patterns the pixels form together.

**Learning Objectives**

- Understand how convolutional neural networks fit into the fundamentals of data and machine learning.

**Prerequisites**

- It is recommended to complete the Introduction to Data and Machine Learning course before starting this one.

### Transcript

Hey guys, welcome back. In this video, we're going to introduce convolutional neural networks. As usual, we start by loading the standard libraries, and then we load MNIST. Luckily for us, Keras offers MNIST as part of its collection of datasets, so all we have to do is import mnist from the datasets module and then call load_data. Notice that I've set it to download the data into a temporary folder, so it will be deleted when I shut down the computer, but you can set any path and it will store the data there. So let's check what we have downloaded. As you know by now, MNIST is a dataset with 60,000 images in the training set and 10,000 images in the test set. Each of these images is 28 by 28 pixels. Let's display one image and see what it looks like. This is the number five.
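The loading and inspection steps described here can be sketched as follows. This is a minimal reconstruction, not the exact notebook from the video; the `path` argument to `load_data` is the knob the instructor uses to pick the download location, and the default cache is used below instead.

```python
from tensorflow.keras.datasets import mnist

# Keras downloads MNIST on first use and caches it (by default under
# ~/.keras/datasets); load_data(path=...) lets you choose the cache
# filename, as the instructor does with a temporary folder.
(X_train, y_train), (X_test, y_test) = mnist.load_data()

print(X_train.shape)  # (60000, 28, 28): 60,000 training images
print(X_test.shape)   # (10000, 28, 28): 10,000 test images
print(y_train[0])     # 5 -- the first training image is a handwritten five
```

Displaying the first image then works with `plt.imshow(X_train[0], cmap="gray")`, which renders the 28×28 grid of integer intensities (0 = black, 255 = white) as a picture.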

It's the first image in the set. I've plotted it using plt.imshow, which shows numerical data as an image. In fact, if we look at X_train[0], just the first image, we can see that it's a grid of zeroes and numbers. Where the numbers are, we have the white shape of the five, and where the zeroes are, we have the black background. Cool. So the next thing we're going to do is reshape our data so that the pixels are unrolled into a long vector, 784 numbers long. We can check the shape of X_train now: it's the same number of images, but the array is now two-dimensional, with 784 numbers per row. We also convert the data from integers to floats and rescale it by dividing by 255. So now each image is one long vector of numbers, mostly zeroes, and where the values were between zero and 255, they are now between zero and one. Okay? So, let's go on.
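The unrolling and rescaling step can be sketched like this; a random array stands in for the real images so the snippet is self-contained:

```python
import numpy as np

# Stand-in for the MNIST training images: same shape and dtype as the
# real X_train (uint8 pixel values in 0..255), but random content.
X_train = np.random.randint(0, 256, size=(60000, 28, 28), dtype=np.uint8)

# Unroll each 28x28 image into a flat vector of 28 * 28 = 784 pixels,
# convert from integers to floats, and rescale from [0, 255] to [0, 1].
X_train = X_train.reshape(-1, 784).astype("float32") / 255.0

print(X_train.shape)  # (60000, 784)
```

With the real data you would apply the same two lines to both `X_train` and `X_test`.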

From Keras we import to_categorical. Remember, the problem we're trying to solve is the classification of images representing single digits, so we have 10 categories. Since it's a multi-class classification, we create for each label a vector with 10 binary components, of which only one is different from zero. So, just to check, let's see: y_train[0] is the number five, and y_train_cat[0], the corresponding vector, is zero everywhere except at index five. If we count zero, one, two, three, four, five, we find it's not zero in position number five, and it's zero everywhere else. Alright. So what we're gonna do now is build a fully connected model to classify these images. By now, you should be pretty familiar with these lines, so there should be no surprises and I can probably run them without explanation. Only notice that I'm setting the input dimension to be 784. Then we just train the model. This model is going to take a bit of time to train, so I'm going to cut the video and come back when it's done.
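A sketch of the one-hot encoding and a fully connected model of the kind described; the hidden-layer sizes below are my assumption, since the exact architecture isn't read out in the transcript, and a few stand-in labels keep the snippet self-contained:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense

# One-hot encode integer labels: 5 -> [0, 0, 0, 0, 0, 1, 0, 0, 0, 0].
y_train = np.array([5, 0, 4])                 # stand-in labels
y_train_cat = to_categorical(y_train, num_classes=10)

# Fully connected classifier on the 784 unrolled pixels.
# Hidden-layer sizes (512, 256) are illustrative assumptions.
model = Sequential([
    Input(shape=(784,)),                      # input dimension = 784
    Dense(512, activation="relu"),
    Dense(256, activation="relu"),
    Dense(10, activation="softmax"),          # one output per digit class
])
model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])
```

Training is then a single call, e.g. `model.fit(X_train, y_train_cat, epochs=10, validation_split=0.1)`.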

Feel free to let it run. I just want to point out that, already at the first iteration of this fully connected network, our accuracy is pretty high, so we have great expectations for this dataset. Alright, our model finished training, so let's check what happened. We got to a pretty good validation accuracy. The model is probably overfitting, because the training accuracy is higher than the validation accuracy. What we can do is plot the history and see: yeah, the training accuracy goes almost to 100%, whereas the validation accuracy stays a bit lower and is harder to increase. We'll see that convolutional neural networks actually help us bump this validation accuracy even higher. Just to conclude, we check the accuracy on the test data that we've held out, and yes, we get 97.94%, which is consistent with the validation accuracy we saw during training.
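Plotting the training history as described might look like the following; the accuracy numbers are made up to mimic the overfitting pattern, and a real run would take them from `model.fit(...)` instead:

```python
import matplotlib
matplotlib.use("Agg")          # non-interactive backend for scripts
import matplotlib.pyplot as plt

# Illustrative (made-up) accuracy curves; a real run would use
# history = model.fit(...), then history.history in place of this dict.
history = {
    "accuracy":     [0.92, 0.96, 0.98, 0.99, 0.996],
    "val_accuracy": [0.955, 0.965, 0.972, 0.976, 0.978],
}

plt.plot(history["accuracy"], label="train")
plt.plot(history["val_accuracy"], label="validation")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.savefig("history.png")

# Train accuracy approaching 100% while validation plateaus lower is
# the overfitting signal discussed above.
gap = history["accuracy"][-1] - history["val_accuracy"][-1]
```

The final check on the held-out data is `model.evaluate(X_test, y_test_cat)`, which returns the test loss and accuracy.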

Okay, so we've trained a fully connected neural network on the MNIST dataset for digit recognition, and we've already got pretty good performance. In the next videos we're going to build a convolutional neural network model, and you'll see it's going to be super exciting. Thank you for watching, and see you in the next video.

### About the Author

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics from the University of Padua and Université de Paris VI, and graduated from the Singularity University summer program in 2011.