To design effective machine learning models, you'll need a firm grasp of the mathematics that supports them. This course is part one of the module on maths for machine learning. It will introduce you to the mathematics of machine learning before jumping into common functions and useful algebra, the quadratic model, and logarithms and exponents. After this, we'll move on to linear regression, calculus, and notation, including how to provide a general analysis using notation.
Part two of this module can be found here and covers linear regression in multiple dimensions, interpreting data structures from the geometrical perspective of linear regression, vector subtraction, visualized vectors, matrices, and multidimensional linear regression.
If you have any feedback relating to this course, please contact us at support@cloudacademy.com.
- In this section of the course we're going to talk about the mathematics of machine learning: an introduction to that mathematics. The way we're going to do that is by considering a couple of problems: the first, linear regression, and the second, linear classification. In other words, how can we solve the problem of finding a straight-line model that allows us to predict, in the case of linear regression, an ordinary real number, and in the case of classification, let's say binary classification, the positive and the negative cases, whether that be good/bad, yes/no, up/down, left/right, whatever the cases are. This section will cover as much of the non-statistical mathematics required for machine learning as possible, and hopefully give you a good, broad overview of pretty much everything you need. The statistics we'll cover as a separate module.

So what is the mathematics that we need? Well, let's put statistics down anyway as a bullet point, just so that we know we need to know about it. Within that we could consider probability a subfield, or maybe we put probability under mathematics or somewhere else; let's bracket that and talk about the other kinds of things we need.

One thing we need is linear algebra. That is the set of mathematical techniques for talking about higher-dimensional data, which is data with more than one column: multi-column, high-dimensional data. It's also a way of understanding certain kinds of operations that occur on that kind of complex dataset: how to combine columns, how to reshape them, how to restructure them. Techniques of that kind fall under the category of linear algebra, and we'll talk more about it in a moment.

The other one, let's call it calculus and optimization. Calculus is the study of change, and in particular one way of thinking about that is: if I change one function, how will another function change? Within the topic of machine learning, the two functions of interest are the prediction function and the loss, so the question is: if I change my prediction function, how will my loss change? What we're trying to do in machine learning, of course, is optimize that, and that's where the term optimization comes in. We're looking to tune our prediction function until the total loss is optimal, the best it can be, which in this case means minimum. So we are minimizing a function, and the way you do that minimization is with the tools of calculus.

So those are the big topics: statistics, linear algebra, calculus. Let's also throw in, just as a place to start, a kind of 101, because you might think of those three as the slightly more advanced areas of mathematics you need. You also need beginner mathematics: algebra, functions, equations, that sort of thing, and maybe plots and charts as well. We'll start at that 101 level and then build up a little as we go.

So where are we going to begin? Let's begin with linear regression. I'm going to give you two examples here: linear regression, and then when we've looked at that case, a case of linear, let's say binary, classification.
Those will be example one and example two, and by looking at these two cases we can hopefully explore as much of those three areas of mathematics as we can. Both examples, and the linear algebra idea of working with multi-column data, are sketched briefly below.
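To make the first example, and the role of calculus in minimizing a loss, concrete, here is a minimal sketch that is not taken from the course material itself: it fits a straight-line prediction function y = a*x + b by gradient descent on a squared-error loss. The synthetic data, the learning rate and the number of steps are illustrative assumptions, and it assumes NumPy is available.

import numpy as np

# Tiny synthetic dataset: y is roughly 2*x + 1 plus a little noise (invented for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

a, b = 0.0, 0.0        # parameters of the straight-line prediction function
learning_rate = 0.01

for _ in range(5000):
    y_hat = a * x + b                  # the prediction function
    error = y_hat - y
    loss = np.mean(error ** 2)         # squared-error loss
    # Derivatives of the loss with respect to a and b: this is the calculus step,
    # answering "if I change the prediction function, how does the loss change?"
    grad_a = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Tune the prediction function by moving a and b against the gradient.
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b

print(f"fitted line: y = {a:.2f} * x + {b:.2f}   (final loss {loss:.3f})")

Running this should recover a slope near 2 and an intercept near 1, but the point is only the shape of the loop: predict, measure the loss, differentiate, adjust.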
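The second example, binary linear classification, can be sketched in the same hedged spirit: a straight-line decision boundary w.x + b = 0 separating two classes, trained here with the classic perceptron update rule rather than any method the course has introduced yet. The two synthetic clusters and the number of passes over the data are again assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
# Two well-separated clusters of 2-D points: the negative class near (0, 0),
# the positive class near (3, 3).
negatives = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(25, 2))
positives = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(25, 2))
points = np.vstack([negatives, positives])
labels = np.array([-1] * 25 + [+1] * 25)

w = np.zeros(2)   # weights of the linear decision function
b = 0.0           # intercept

for _ in range(20):                                   # a few passes over the data
    for point, label in zip(points, labels):
        predicted = 1 if (w @ point + b) > 0 else -1  # which side of the line?
        if predicted != label:                        # misclassified: nudge the line
            w += label * point
            b += label

print("classify as positive when", w, "@ x +", b, "> 0")

Here good/bad, yes/no, up/down are simply the labels -1 and +1; everything else is the same pattern of adjusting a linear model whenever it gets a case wrong.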
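Finally, the linear algebra point about multi-column data, combining it, reshaping it and restructuring it, is easiest to see with a small array example; the column names and numbers below are invented purely for illustration.

import numpy as np

# Three columns of data (three measured features) for four observations.
heights = np.array([1.70, 1.80, 1.60, 1.75])
weights = np.array([65.0, 80.0, 55.0, 72.0])
ages    = np.array([34.0, 41.0, 29.0, 50.0])

# Combining: stack the columns into a single 4 x 3 matrix of observations.
data = np.column_stack([heights, weights, ages])

# Restructuring: the same numbers viewed with features as rows instead of columns.
restructured = data.T

# Combining columns linearly: one weighted score per observation,
# which is exactly the kind of operation linear algebra describes.
scores = data @ np.array([0.2, 0.05, 0.1])

print(data.shape, restructured.shape, scores.shape)   # (4, 3) (3, 4) (4,)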
Michael began programming as a young child, and after freelancing as a teenager, he joined and ran a web start-up during university. Alongside his physics studies and after graduating, he worked as an IT contractor: first in telecoms in 2011 on a cloud digital transformation project; then variously as an interim CTO, Technical Project Manager, Technical Architect and Developer for agile start-ups and multinationals.
His academic work on Machine Learning and Quantum Computation furthered an interest he now pursues as QA's Principal Technologist for Machine Learning. He joined QA in 2015, where he authors and teaches programmes on computer science, mathematics and artificial intelligence, and co-owns the data science curriculum.