Practical Machine Learning
To design effective machine learning systems, you'll need a firm grasp of the mathematics that supports them. In this module, we'll introduce you to the mathematics of machine learning, beginning with common functions and useful algebra, the quadratic model, and logarithms and exponents. After this, we'll move on to linear regression, calculus, and notation, including how to provide a general analysis using notation. We'll also cover how to use linear regression in multiple dimensions, interpret data structures from the geometrical perspective of linear regression, and discuss how you can use vector subtraction. We'll finish the module by discussing how you can use visualised vectors to solve problems in machine learning, and how you can use matrices and multidimensional linear regression.
- So in vector subtraction, we sort of reverse the process. Let's have a look. Suppose I have one vector, call it A, going in one direction, and another, call it B, going in a different direction. It turns out that when you run through the calculations, A minus B is just the component-wise difference: A one minus B one, and A two minus B two. If you know about vector addition, this is the intuitive counterpart: rather than adding the matching components, the components with the same indices, you subtract them. That's the subtraction formula. Geometrically, you can interpret this as adding the negative vector. What does that mean? It means I take A, and then, just as we did before with addition, I add the negative of B. What's the negative of B? It's B pointing in the opposite direction: if B goes one way, minus B goes the other. So we take A, go to the tip of A as if we're adding a vector there, and then add minus B, which is B in the other direction. What we find is that we've travelled from the origin to a new point. Now, it turns out that the amount we've travelled is the same as the distance from the tip of B to the tip of A. So if we do A minus B, we land in this new location, but the distance we've travelled is just the distance between the tip of B and the tip of A.
So oftentimes, when people draw vector subtraction, they'll label the two vectors A and B and say the difference is the vector running from the tip of B to the tip of A. Technically speaking, if the base of that difference vector were anchored at zero, it would land at the point A minus B; but in the geometrical picture, we can simply understand it as the displacement from the tip of B to the tip of A.
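The component-wise formula and the tip-to-tip picture described above can be sketched in a few lines of NumPy. The vectors here are made-up example values, chosen only to illustrate the arithmetic:

```python
import numpy as np

# Two example vectors (hypothetical values, for illustration only)
A = np.array([3.0, 2.0])
B = np.array([1.0, 4.0])

# Component-wise subtraction: (A1 - B1, A2 - B2)
diff = A - B

# Equivalent view: subtracting B is the same as adding the negative of B
via_negative = A + (-B)

# Geometrically, A - B is the displacement from the tip of B to the tip of A:
# starting at B and adding (A - B) lands you exactly at A
lands_at = B + diff

print(diff)                                # [ 2. -2.]
print(np.array_equal(diff, via_negative))  # True
print(np.array_equal(lands_at, A))         # True
```

The last check is the transcript's point in code: anchored at the origin, A minus B lands at the point (2, -2), but read as a displacement it carries you from the tip of B to the tip of A.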
About the Author
Michael began programming as a young child, and after freelancing as a teenager, he joined and ran a web start-up during university. Alongside his physics studies and after graduating, he worked as an IT contractor: first in telecoms in 2011 on a cloud digital transformation project, then variously as an interim CTO, Technical Project Manager, Technical Architect and Developer for agile start-ups and multinationals.
His academic work on Machine Learning and Quantum Computation furthered an interest he now pursues as QA's Principal Technologist for Machine Learning. Having joined QA in 2015, he authors and teaches programmes on computer science, mathematics and artificial intelligence, and co-owns the data science curriculum at QA.