The ethics of AI
Difficulty: Beginner
Duration: 1h 23m
Students: 2,650
Rating: 3.3/5
Description

Machine learning is a big topic. Before you can start to use it, you need to understand what it is, and what it is and isn’t capable of. This course is part one of the module on machine learning. It starts with the basics, introducing you to AI and its history. We’ll discuss the ethics of AI and look at examples of AI systems in use today. We’ll cover data, statistics and variables, before moving on to notation and supervised learning.

Part two of this two-part series can be found here, and covers unsupervised learning, the theoretical basis for machine learning, models and linear regression, the semantic gap, and how we approximate the truth.

If you have any feedback relating to this course, please contact us at support@cloudacademy.com.

Transcript

Let's now have a look at some of the ethical concerns around the use of AI. The first issue here is one of mistakes and safety. And here is the principle: if the historical data the machine has seen looks like, or is very similar to, the data it is seeing when it solves the problem, then you're okay. If it doesn't, then you are seriously not okay. You are possibly deeply unsafe.

Ethical concern number two is the origin, let's say, and quality of the historical data, sometimes called the training data: the data the machine is fed when it is specializing the algorithms it will use. What are the concerns about the origin of this data? Well, here's the thing: practitioners, experts, scientists are not always in control of the information the machine is considering when it is specializing itself. How might that be? Suppose a machine learning system considers social media posts to assess, let's say, the marketing profile of some user, to assess their political affiliation, to sell them a product or to advise them about a decision. Suppose, because the machine is considering social media data, I, as a malicious agent, create social media posts that I know the machine will consider, will see, will process. And I create them so as to bias, or prejudice, or disrupt its future operation. With such a system you could cause all kinds of chaos in people's lives. So if we have, let's call it, expert control of and good faith in the data, then we're probably okay. And if we don't, then maybe we're not.

Issue three, then, and the final issue we will consider: even when the system is not making a mistake, is operating with high-quality information, and is in fact making the correct diagnosis, the correct prediction, the correct operation, there may still be concerns with its use. We can call these moral concerns. Let me give you some examples. There are concerns around profiling a person using what we may call protected characteristics, or, let's say, sensitive features of a person.

Another kind of moral concern is, let's call it, the empathetic one, or the automated one. What I mean here is that even were it possible for a machine to deliver a health diagnosis to a person, there is a question around whether we'd want to do that in an automated fashion. Suppose the diagnosis was cancer. Would we wish to live in a society in which people went to vending machines to discover that they had cancer? There are arguments for such a society, but there are a large number of moral arguments against it. And those arguments really have to do with the emotional character of human beings: people can actually become more stressed, can feel more isolated, more alone, more ill, from engaging with automated systems that make no allowance for their mental health, their emotional health, their physical wellbeing.

Following on from that, we could even say there's a third little issue here: the system is too specialized. If we build a machine to solve one highly specific problem, we may come to see that the problem we asked the machine to solve was not the problem we originally had, or even really have. It is merely one piece of the puzzle, and in fact a human being might have performed better across the board.

So we have profiling. We have the empathetic concern, which amounts to much the same thing as being too specialized. Let's also perhaps say we have the automation concern. So yes, the system is too specialized; but what about it being automated? Are there concerns about job losses here, about replacing people? For the entirety of human history, every technological innovation has replaced human beings in the activity in which it was used. The automated loom replaced the hand stitching of clothes. But what we see every time such a replacement is made is that demand for what can now be automated increases many fold. In other words, if we go back to 1800, or 1700, or even earlier: there is a global demand in the year 1700 for everyone to have multiple shirts, more than one or two, many, like we have today, but there is no capacity whatsoever to deliver on that demand. The wish is there. If you asked anyone on the planet in the year 1700, "Would you wish to have multiple shirts?" they would say yes. But they can't have them. When the automated loom was invented, suddenly everyone could have them. And because that demand existed, more people were employed to serve it than lost their jobs to the automation. The reliable thing about human beings is that demand is effectively infinite: if you can create a new technology, then the things that new technology allows will be wished for by everyone.
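To make the first concern in the transcript concrete, here is a minimal sketch, in Python with NumPy and SciPy, of one common way to check whether the data a deployed model is seeing still resembles its historical training data. The feature, the numbers and the 0.01 threshold are invented purely for illustration; this is not part of the course material.

```python
# A minimal sketch of the safety concern above: does the data the deployed
# model sees still resemble the historical (training) data it learned from?

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Historical data the model was trained on (one hypothetical numeric feature).
training_feature = rng.normal(loc=50.0, scale=10.0, size=5_000)

# Data arriving in production. Here we simulate drift by shifting the mean.
live_feature = rng.normal(loc=65.0, scale=10.0, size=1_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the live data
# is no longer drawn from the same distribution as the training data.
statistic, p_value = ks_2samp(training_feature, live_feature)

if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2g}): "
          "the model may be operating outside the data it has seen.")
else:
    print("Live data still resembles the training data.")
```

In a real system a check like this would run per feature on a schedule, and a drift alarm would be a cue for human review or retraining, not proof of failure.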

About the Author

Michael began programming as a young child, and after freelancing as a teenager, he joined and ran a web start-up during university. Alongside his physics studies and after graduating, he worked as an IT contractor: first in telecoms in 2011 on a cloud digital transformation project; then variously as an interim CTO, Technical Project Manager, Technical Architect and Developer for agile start-ups and multinationals.

His academic work on Machine Learning and Quantum Computation furthered an interest he now pursues as QA's Principal Technologist for Machine Learning. Joining QA in 2015, he authors and teaches programmes on computer science, mathematics and artificial intelligence; and co-owns the data science curriculum at QA.