
# Vector Subtraction


This course is part of the Practical Machine Learning learning path.
### Overview

- Difficulty: Beginner
- Duration: 1h 32m
- Students: 60
- Rating: 5/5

### Description

To design effective machine learning systems, you’ll need a firm grasp of the mathematics that supports them. This course is part two of the module on maths for machine learning. It focuses on how to use linear regression in multiple dimensions, how to interpret data structures from the geometrical perspective of linear regression, and how you can use vector subtraction. We’ll finish the course by discussing how you can use visualized vectors to solve problems in machine learning, and how you can use matrices and multidimensional linear regression.

Part one of this module can be found here and provides an intro to the mathematics of machine learning, and then explores common functions and useful algebra for machine learning, the quadratic model, logarithms and exponents, linear regression, calculus, and notation.

### Transcript

So in vector subtraction, we sort of reverse the process. Let's have a look. Suppose I have one vector, drawn here in red and going in one direction, and call it A, and another going in a different direction, call it B. It turns out that when you run through the calculations, A minus B just means subtracting the components: A one minus B one, and A two minus B two. So if you know about vector addition, where adding two vectors means adding their components, subtraction is the intuitive counterpart: you subtract the components with the same indices. That's the subtraction formula.

Geometrically, you can interpret this as adding the negative vector. What does that mean? It means I take A, and then, just as we did before, I add the negative of B. What's the negative of B? It's B pointing in the opposite direction: if B is going this way, that's plus B, and the reverse of it is minus B. So we take A, there in red, shift things across a bit, and go to the tip of A as if we're adding a vector onto the end, but the vector we add is negative B, which is B in the other direction. And what we find is that we've actually traveled from the origin to... here. Now, it turns out that the amount we've traveled is the same as the distance from the tip of B to the tip of A. So if we do A minus B, we land in this location, but the distance we've traveled is just the distance between the tip of B and the tip of A.
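The component-wise formula above can be sketched in a few lines of Python. This is a minimal illustration with plain tuples; the particular values of A and B are made up for the example, not taken from the lecture.

```python
# Two example 2-D vectors (illustrative values, not from the lecture).
A = (4.0, 1.0)
B = (1.0, 3.0)

# A - B just subtracts matching components: (a1 - b1, a2 - b2).
diff = tuple(a - b for a, b in zip(A, B))
print(diff)  # (3.0, -2.0)

# Geometrically, this is the same as adding the negative of B to A.
neg_B = tuple(-b for b in B)
added = tuple(a + nb for a, nb in zip(A, neg_B))
assert diff == added
```

The assertion at the end checks the geometric interpretation from the transcript: subtracting B is exactly adding the reversed vector minus B.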
So oftentimes when people draw vector subtraction, they'll just say: there's A, the other one's B, and the difference is this vector here, drawn from tip to tip. Technically speaking, it should be understood that if the base of that difference vector were anchored at zero, you would actually land here. But in the geometrical picture, we can simply understand it as spanning the distance from the tip of B to the tip of A.
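The tip-to-tip picture above can be checked numerically: the length of A minus B equals the straight-line distance between the points at the tips of A and B. Again, the vector values here are illustrative, not from the lecture.

```python
import math

# Example vectors, interpreted as points at their tips (illustrative values).
A = (4.0, 1.0)
B = (1.0, 3.0)

# The difference vector A - B points from the tip of B to the tip of A.
tip_to_tip = tuple(a - b for a, b in zip(A, B))

# Distance between the two tips, and the magnitude of A - B.
dist_tips = math.dist(A, B)
length_of_diff = math.hypot(*tip_to_tip)

assert math.isclose(dist_tips, length_of_diff)
```

`math.dist` treats each tuple as a point and returns the Euclidean distance, while `math.hypot` returns the magnitude of the difference vector; the assertion confirms the two agree, which is exactly the claim in the transcript.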