Module 0 - What is machine learning?
Practical Machine Learning
Machine learning is a big topic. Before you can start to use it, you need to understand what it is, and what it is and isn't capable of. In this module we'll start with the basics, introducing you to AI and its history. We'll discuss its ethics and look at examples of AI that exist today. We'll cover data, statistics and variables, before moving on to notation and supervised and unsupervised learning. Finally, we'll end by going into some depth on the theoretical basis for machine learning: models and linear regression, the semantic gap, and how we approximate the truth.
- In this video, I would like to talk about Artificial Intelligence: its history, its uses, and some concerns we may have regarding the techniques of AI. So let's start with the term Artificial Intelligence. Perhaps, to give us some insight into what the term means, we should consider two ways a system may be artificially intelligent: a system may be weakly artificially intelligent, or strongly artificially intelligent. As a side issue, since we are talking about computers, what do we mean by computer? We may think such a thing is intuitive, but it really isn't; it's actually very hard to define what a computer is. For our purposes, let's just say it is any system of algorithms, any system of rules, any system which follows some kind of rule or procedure. Okay, so what is it for such a system to be weakly artificially intelligent? It is to behave as if it were human in some specific condition. So we're talking about systems that follow algorithms, and the question for us now is: how are these systems intelligent? What does it mean to be intelligent? Let's talk a little about our goal: to replicate or reproduce human performance on tasks that require, say, creativity, problem solving and other faculties typically regarded as faculties of intelligence. The machine should behave as if it were human in some sense. Now, back to weak and strong. Something is weakly artificially intelligent if it can replicate human performance on one highly specific task. Weak AI is highly specialised to a particular problem: move it from one system of conveyor belts to a different system, or enter a postcode or a letter style it hasn't seen before, and the whole thing becomes unusable. What is the alternative? The alternative is to solve problems generally.
Here I would actually quite like to use the word skill, or skill acquisition. So one way we could define strong AI is that it is where a machine acquires skills. A more common way of defining it, perhaps, is general problem solving. The word general here means that if I get the machine to solve one problem, I would hope it can also solve some other kind of problem. Like a human being: if I get a person to solve one highly specific problem, it isn't that they are somehow only able to do that. Exceptionally rarely would that be the case, because human beings acquire skills, and they can solve small variations on problems, too.
About the Author
Michael began programming as a young child, and after freelancing as a teenager, he joined and ran a web start-up during university. While studying physics and after graduating, he worked as an IT contractor: first in telecoms in 2011 on a cloud digital transformation project, then variously as an interim CTO, Technical Project Manager, Technical Architect and Developer for agile start-ups and multinationals.
His academic work on Machine Learning and Quantum Computation furthered an interest he now pursues as QA's Principal Technologist for Machine Learning. Having joined QA in 2015, he authors and teaches programmes on computer science, mathematics and artificial intelligence, and co-owns the data science curriculum at QA.