The history of AI
1h 23m

Machine learning is a big topic. Before you can start to use it, you need to understand what it is, and what it is and isn't capable of. This course is part one of the module on machine learning. It starts with the basics, introducing you to AI and its history. We'll discuss the ethics of it, and talk about examples of currently existing AI. We'll cover data, statistics and variables, before moving on to notation and supervised learning.

Part two of this two-part series can be found here, and covers unsupervised learning, the theoretical basis for machine learning, models and linear regression, the semantic gap, and how we approximate the truth.

If you have any feedback relating to this course, please contact us at


- So we've been talking about the history of AI, because in many ways this is also the history of computer science, and the history of programming. Right? So the 1940s and '50s saw the first generation of AI. And here, really, this was just the use of logic in programming. So we can think of this as logic plus programming: the development of algorithms that had conditions in them, tests, and those tests could be sensitive to information external to the machine. And the machine could make decisions based on that information. And that is indeed something that human beings do with intelligence, perhaps. But it is, nevertheless, a little too dumb for us today. So, for example, I could say: if the age of the user is greater than or equal to 18, we allow them into the building; otherwise, we deny them. Now, this is a very intelligent system, compared to the history of human invention. If you go back to the Victorian era, there is no system you could build that adapts to a user's information in such a way. So this is a real innovation. Right, okay. So, you know, that was attempted, but then it didn't quite work out, and so we have what was called an AI winter. So: winter, drawn in blue, perhaps. And during the winter, the research projects dried up. So, what came next? The 1980s brought another little renaissance. And here there were multiple approaches being tried. One of the key ones: expert rules. And the idea here, or at least the thought here, was that maybe what went wrong with the earlier approach, logic plus programming, was that the tests being used didn't have the right structure. That maybe, if we asked experts about things, you know, asked the expert bike rider, "What rules do you follow?", and took those rules, perhaps we could build a machine which imitates the expert: one that behaves as if human, with human performance.
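That first-generation "logic plus programming" system can be sketched in a couple of lines. This is a minimal illustration of the age rule from the lecture; the function name and return values are just illustrative choices, not part of the original:

```python
# A first-generation "AI": a hand-written rule that tests
# information external to the machine (the user's age).
def admit_user(age):
    # The programmer chose this threshold by hand; nothing is learned.
    if age >= 18:
        return "allow"
    return "deny"

print(admit_user(21))  # allow
print(admit_user(15))  # deny
```

The whole of the system's "intelligence" lives in that one hand-written test, which is exactly the limitation the later approaches tried to overcome.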
Or better than human performance. When this approach, too, seemed to be somewhat limited, we moved on to a new one, which is machine learning. So again we have this winter, this AI winter, and then, in the late 2000s, really, mostly, we have machine learning. It was invented much earlier. In many ways, to be honest, it was invented in the Victorian era or even earlier than that, since the techniques are mostly just those of statistics: mostly just the use of statistics to specialize the algorithm the machine will follow. So we're using statistics as the process of considering data and finding out its key features, and the machine then runs the result. For this activity, you need large amounts of data to make high-quality rules. Let's now go and have a look at some of the ethical concerns of AI.
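To see the contrast with the hand-written rule, here is a minimal sketch of the statistical idea: instead of a programmer fixing the threshold, a simple statistic of labelled data chooses it. The data, the midpoint-of-means rule, and all names here are illustrative assumptions, not anything from the lecture:

```python
# A minimal "learning" version of the admission rule: the threshold
# is derived from labelled examples rather than hard-coded.
def learn_threshold(allowed_ages, denied_ages):
    # Place the decision boundary midway between the two group means.
    mean_allowed = sum(allowed_ages) / len(allowed_ages)
    mean_denied = sum(denied_ages) / len(denied_ages)
    return (mean_allowed + mean_denied) / 2

# Illustrative labelled data: ages previously allowed in or denied.
allowed = [19, 23, 31, 45]
denied = [12, 15, 16, 17]

threshold = learn_threshold(allowed, denied)

def admit(age):
    # The rule's structure is fixed, but its parameter was learned.
    return "allow" if age >= threshold else "deny"
```

Even this toy version shows why data matters: with only a handful of examples the learned threshold is crude, which is the point made above about needing large amounts of data to get high-quality rules.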

About the Author

Michael began programming as a young child, and after freelancing as a teenager, he joined and ran a web start-up during university. Alongside his physics studies and after graduating, he worked as an IT contractor: first in telecoms in 2011 on a cloud digital transformation project; then variously as an interim CTO, Technical Project Manager, Technical Architect and Developer for agile start-ups and multinationals.

His academic work on Machine Learning and Quantum Computation furthered an interest he now pursues as QA's Principal Technologist for Machine Learning. Joining QA in 2015, he authors and teaches programmes on computer science, mathematics and artificial intelligence; and co-owns the data science curriculum at QA.