Getting Started with Deep Learning: Introduction To Machine Learning

# Cost Function


**Difficulty:** Beginner

**Duration:** 2h 4m

**Students:** 373

### Description

Machine learning is a branch of artificial intelligence that deals with learning patterns and rules from training data. In this course from Cloud Academy, you will learn all about its structure and history. Its origins date back to the middle of the last century, but only in the last decade have companies taken full advantage of it in their products. This machine learning revolution has been enabled by three factors.

First, memory storage has become economical and accessible. Second, computing power has also become readily available. Third, sensors, phones, and web applications have produced a lot of data, which has contributed to training these machine learning models. This course will guide you through the basic principles, foundations, and best practices of machine learning. It is advisable to understand and be able to explain these basics before diving into deep learning and neural networks. This course is made up of 10 lectures and two accompanying exercises with solutions. This Cloud Academy course is part of the wider Data and Machine Learning learning path.

**Learning Objectives**

- Learn about the foundations and history of machine learning
- Learn and understand the principles of memory storage, computing power, and phone/web applications

**Prerequisites**

It is recommended to complete the Introduction to Data and Machine Learning course before taking this course.

### Resources

The dataset used in exercise 2 of this course can be found at the following link: https://www.kaggle.com/liujiaqi/hr-comma-sepcsv/version/1

### Transcript

Hey guys, welcome back. In this video I will show you how to load some data, plot it with a scatter plot, and calculate the cost function with respect to a straight line. So, as usual, we start by importing the packages that, by now, you should know very well: matplotlib inline to display the plots, and then matplotlib.pyplot, pandas, and numpy. Then we load the file weight-height. It's the same file we used in the previous exercise, so no surprises: we have the gender, height, and weight of people. Then we do a scatter plot, and this you've also seen in the previous lecture, so I won't dwell on it. We just need to use the plot method from the DataFrame object, set the kind of plot to scatter, tell the plot which x and y variables we are going to use, and set a title. Here we are also going to add a line to the plot, and this is a line we've drawn by hand: we've given the beginning point, 55, 78, and the end point, and set the line width and color.
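The steps above can be sketched as follows. Since the course's weight-height file may not be at hand, this sketch synthesizes a stand-in DataFrame with the same columns described in the video (Gender, Height, Weight); the synthetic values and the red line's endpoints are illustrative, not the exact ones from the lecture:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-in for the course's weight-height dataset
# (same columns as in the video: Gender, Height, Weight).
rng = np.random.default_rng(0)
height = rng.uniform(55, 80, size=200)
weight = 7.7 * height - 350 + rng.normal(0, 15, size=200)
df = pd.DataFrame({'Gender': ['Male'] * 200,
                   'Height': height,
                   'Weight': weight})

# Scatter plot via the DataFrame's plot method, as in the video.
ax = df.plot(kind='scatter', x='Height', y='Weight',
             title='Weight vs Height')

# A line "drawn by hand", not computed by any algorithm
# (these endpoint values are illustrative).
ax.plot([55, 78], [75, 250], color='red', linewidth=3)
plt.savefig('scatter.png')
```

With a real copy of the dataset, `pd.read_csv('weight-height.csv')` would replace the synthetic DataFrame and the rest stays the same.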

Okay, so this is not the linear regression. This is just a line I've drawn that seems to fit the data pretty well, but it's not calculated with any algorithm. Now let's define a function called line that takes some values in x and performs what you've learned to be the equation that defines a line: it multiplies x by the weight w, and adds the bias b. These are both set to zero at the beginning. Next, we can define a space of 100 points, equally spaced between 55 and 80, and calculate yhat using the line function we've just defined. This will take the linear space of 100 points, multiply them by the weight, and add the bias to them. So let's actually run these cells first, so we can check what we're doing. Let's define the function first. Okay, we've defined the function.
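A minimal sketch of the two cells just described, the line function with both parameters defaulting to zero and the 100-point linear space:

```python
import numpy as np

def line(x, w=0, b=0):
    # y = w * x + b; weight and bias both start at zero, as in the video
    return x * w + b

# 100 equally spaced points between 55 and 80
X = np.linspace(55, 80, 100)

yhat = line(X)          # all zeros, since w = b = 0
yhat_b1 = line(X, b=1)  # all ones once the bias is set to 1
```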

We define X and look at it. Okay, so our array starts at 55 and ends at 80. Great! Now let's calculate yhat and look at it. yhat is equal to an array of zeros, and that's obvious because both w and b are zero. Notice that if I set, for example, b to one, I will have ones everywhere. Okay, cool. So yhat is this. Let's plot yhat as a function of x, and add it to our data. Okay, perfect, it's what we wanted: a straight line, passing through zero, with zero slope. Now we are going to calculate the cost, which is the mean squared error, given by the residuals of these data points from this line. Obviously this is not the best line, but we can still calculate the cost. So how are we going to do that? Well, first we define the mean squared error. This will take our true data and our predicted data, which is the points of the line, the yhat. So this is y, and this is yhat. It will take the difference, one minus the other, square it, save it in a temporary variable called s, and then calculate the mean of s. That's the mean squared error. Perfect. We've defined the function, and now we build two arrays: X from the values of height, and y_true from the values of weight. We use the values attribute so that X and y_true are two numpy arrays, and not pandas objects: the values attribute of a pandas DataFrame, or a pandas Series, returns a numpy array.
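The cost function described here can be sketched like this; the sample height and weight arrays are made-up stand-ins for `df['Height'].values` and `df['Weight'].values`, not the course data:

```python
import numpy as np

def line(x, w=0, b=0):
    # y = w * x + b, with weight and bias defaulting to zero
    return x * w + b

def mean_squared_error(y_true, y_pred):
    # difference, squared, saved in a temporary variable, then averaged
    s = (y_true - y_pred) ** 2
    return s.mean()

# Made-up stand-ins for df['Height'].values and df['Weight'].values
X = np.array([60.0, 65.0, 70.0, 75.0])
y_true = np.array([110.0, 140.0, 170.0, 200.0])

y_pred = line(X)  # w = b = 0, so y_pred is all zeros
cost = mean_squared_error(y_true, y_pred)
```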

What I mean by this is, if we check y_true, it's actually an array. Perfect. Now we can calculate the predicted y by applying the line function we defined above to the particular X values that define the height. So the y prediction is this, and we can check that this is also an array. Perfect. It's a column array, because X was a column vector. That's fine: we can still calculate the mean squared error between y_true and the predicted y, since numpy will broadcast the shapes together. Alright, so it calculates it. It takes a bit of time, and in the end we get this number. Notice that we could have evaluated it faster by flattening the column array first; the result would still be the same, just executed faster. Okay, great. Your turn now: try changing the values of w and b, the weight and the bias, and recalculate the mean squared error. See what happens if you increase w, for example, or increase b, and try to make the line closer to the data. I'll see you in the next video. Thank you for watching.
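The column-array point and the closing exercise can be sketched together; the arrays and the "better" w and b values below are illustrative choices, not the ones from the lecture:

```python
import numpy as np

def line(x, w=0, b=0):
    return x * w + b

def mean_squared_error(y_true, y_pred):
    s = (y_true - y_pred) ** 2
    return s.mean()

y_true = np.array([110.0, 140.0, 170.0, 200.0])
X_col = np.array([[60.0], [65.0], [70.0], [75.0]])  # column vector, shape (4, 1)

# With a column y_pred, broadcasting expands the subtraction to a (4, 4)
# matrix; flattening first keeps it a plain length-4 vector, which is faster.
cost_flat = mean_squared_error(y_true, line(X_col).flatten())

# The exercise: change w and b and watch the cost drop as the line
# approaches the data (w=6, b=-250 are illustrative values).
cost_better = mean_squared_error(y_true, line(X_col, w=6, b=-250).flatten())
assert cost_better < cost_flat
```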

# About the Author

**Students:** 3037

**Courses:** 8

**Learning paths:** 3

I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning, and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI, and graduated from the Singularity University summer program in 2011.