Evaluating Model Performance
Difficulty: Beginner
Duration: 2h 4m
Students: 2114
Ratings: 4.2/5
Description

Machine learning is a branch of artificial intelligence that deals with learning patterns and rules from training data. In this course from Cloud Academy, you will learn all about its structure and history. Its origins date back to the middle of the last century, but only in the last decade have companies taken full advantage of it in their products. This machine learning revolution has been enabled by three factors.

First, memory storage has become cheap and accessible. Second, computing power has also become readily available. Third, sensors, phones, and web applications have produced large amounts of data, which has contributed to training these machine learning models. This course will guide you through the basic principles, foundations, and best practices of machine learning. You should be able to understand and explain these basics before diving into deep learning and neural networks. This course is made up of 10 lectures and two accompanying exercises with solutions. This Cloud Academy course is part of the wider Data and Machine Learning learning path.

Learning Objectives

  • Learn about the foundations and history of machine learning
  • Learn about the three enabling factors: affordable memory storage, readily available computing power, and data from sensors, phones, and web applications

Intended Audience

We recommend completing the Introduction to Data and Machine Learning course before taking this one.

Resources

The datasets and code used throughout this course can be found in the GitHub repo here.

 

Transcript

Hey guys, welcome back. Now we're going to test how good our model is by doing a train/test split. The train_test_split function is part of scikit-learn, the package that contains all of the useful machine learning models we can use in Python. From its model_selection module we import the function train_test_split, and then use it to split X and y_true into training and testing X and training and testing y. The test size is going to be 20%, and that 20% of the data is randomly sampled. So let's check the sizes: X train is 8,000 and X test is 2,000, and likewise y train is also 8,000. Okay, perfect. The next thing we're going to do is reset the weights of our model. So we reset W, which is a doubly nested array: we take the value in the first row and first column (there is only one number, it's just doubly nested) and set it to zero, and we also set the only value we have for the bias to zero. Then we set the model's weights to the new values of W and B. We do this because we want to train the model just on X train and y train, so we are purposefully resetting the state of our linear regression parameters so that the model can learn again.
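Here is a minimal sketch in Python of the steps just described. The data is synthetic and the variable names (X, y_true), the single-Dense-layer Keras model, and the choice of optimizer are assumptions for illustration; the actual notebook lives in the course's GitHub repo.

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Hypothetical stand-in data; the lecture uses the dataset from the course repo
X = np.random.normal(size=(10000, 1))
y_true = 3.0 * X[:, 0] + np.random.normal(size=10000)

# Randomly sample 20% of the rows into the test set
X_train, X_test, y_train, y_test = train_test_split(X, y_true, test_size=0.2)
print(len(X_train), len(X_test))  # 8000 and 2000

# Linear regression expressed as a single Dense unit (the optimizer is an assumption)
model = Sequential([Dense(1, input_shape=(1,))])
model.compile(optimizer='sgd', loss='mean_squared_error')

# Reset the parameters: the kernel W is a (1, 1) array, the bias B a (1,) array
W, B = model.get_weights()
W[0, 0] = 0.0
B[0] = 0.0
model.set_weights([W, B])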

Then we run the fit. Notice that here I've set verbose equal to zero so it's not outputting any of those progress lines, but it's doing its calculation, and I've set the number of iterations to fifty. It will take a few seconds, but at some point it will finish and tell me that it's done. When that's done I also run the cell, which is much faster, where I predict the values for X train and call them "y train predicted", and for X test, which I call "y test predicted". Notice that I decided to flatten those to single arrays instead of column arrays; that's just for speed later. Also, I'm not going to use the mean squared error function I defined above, but the one from scikit-learn's metrics module, and the reason is that this function is actually even faster than the one I defined above. So I import it as MSE, and then I print that the mean squared error on the training set is, and here I calculate MSE of y train and y train predicted, and that the mean squared error on the test set is, and here is the same calculation for the test set. The mean squared error for the train set is 280, and the mean squared error for the test set is also 280, which is great because it means our model is generalizing pretty well: it's making the same error on the training and on the test set. We can also calculate the R squared score for both the training and the test set. What matters here is not only how high your R squared score is (and remember, the higher, the better; the closer to one, the better your fit is), but what is most important is that the score for the training set should be close to the score for the test set. That's a check that your model is able to generalize from the data it has been trained on to new data it has never seen. So this was a supervised learning problem, a linear regression that we performed using Keras. Thank you for watching, and see you in the next video.
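Continuing the sketch above, this is roughly what the fit, predict, and scoring steps look like; the epoch count and verbose=0 come from the lecture, while the variable names are the illustrative ones from the previous sketch rather than the notebook's exact code.

from sklearn.metrics import mean_squared_error as mse
from sklearn.metrics import r2_score

# verbose=0 suppresses the per-epoch output; 50 iterations as mentioned in the lecture
model.fit(X_train, y_train, epochs=50, verbose=0)

# Flatten the (n, 1) column predictions into plain 1-D arrays
y_train_pred = model.predict(X_train).ravel()
y_test_pred = model.predict(X_test).ravel()

print("Mean squared error on the training set:", mse(y_train, y_train_pred))
print("Mean squared error on the test set:", mse(y_test, y_test_pred))

# The two R squared scores should be close to each other if the model generalizes well
print("R2 score on the training set:", r2_score(y_train, y_train_pred))
print("R2 score on the test set:", r2_score(y_test, y_test_pred))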

About the Author
Students: 8947
Courses: 8
Learning Paths: 8

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program in 2011.