Why we need to teach robots to be good
Artificial intelligence is taking over more and more of the boring jobs in life. It is incredibly useful. But now that robots make so many decisions for us, they need to be moral as well.
Lord Evans, the chairman of the independent Committee on Standards in Public Life, has warned that “the public need reassurance about the way AI will be used”. Before it is widely used by the government, he argues, we need to be sure that the technology is accountable, open, and free from bias.
Find out more
Isaac Asimov, whose short stories inspired the film I, Robot, outlined a clear first rule of robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But “harm” is a vague term, and there are many ways that AI can cause problems without injuring anyone. There are also situations where someone is always going to get hurt.
Every day, programmers building self-driving cars grapple with ethical problems that have plagued philosophers for centuries. Should a car heading towards an obstacle risk the life of its driver, or should it veer to one side and risk killing the passengers of another vehicle?
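To see how such a dilemma surfaces in code, here is a deliberately simplified sketch. The function name, the rule, and the numbers are all invented for illustration; real autonomous-vehicle software is vastly more complex and does not work this way.

```python
# A toy illustration of how an ethical trade-off might appear in code.
# Everything here is a hypothetical simplification for discussion.

def choose_action(occupants_in_car: int, occupants_in_other_vehicle: int) -> str:
    """Pick the manoeuvre that puts fewer people at risk."""
    if occupants_in_car <= occupants_in_other_vehicle:
        # Risk falls on the car's own occupants.
        return "brake and stay in lane"
    # Risk shifts to the other vehicle.
    return "swerve"

print(choose_action(1, 4))  # → brake and stay in lane
```

Even this trivial rule embeds a moral judgement, namely that lives should simply be counted, which is exactly the kind of choice a programmer is forced to make, whether or not anyone calls it ethics.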
Some scientists and philosophers are now calling for robots to learn how to be good. Could they be right?
No, robots are simply tools. A calculator does not need morals because it only ever does what we ask of it. Even an automated vehicle follows a set of predetermined commands. Just because some of these instructions involve making decisions does not mean that the artificial intelligence behind them needs to be moral.
Yes! As robots become more and more complex, it will become harder to understand the choices that they make. Unless we give them a sense of right and wrong, and teach them how to learn human ethics, then who knows what chaos they might bring? Any machine that has the power to make moral decisions should understand, at its simplest, how to be good.
- Would you want a self-driving car that put your life first or one that always acted in the interest of the general public?
- Imagine you are part of a team of scientists developing a human-sized robot that is supposed to look after young children. In groups, discuss and agree on a list of 10 moral rules you would teach it.
Some People Say...
“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” — Eliezer Yudkowsky, US writer
What do you think?
- Committee on Standards in Public Life
- A group that advises the UK government and prime minister on ethical (moral) standards of public life.
- Artificial Intelligence
- Used informally, means the machines (or computers) that copy human intelligence, such as learning and problem-solving.
- Isaac Asimov
- (1920-1992) American writer and professor of biochemistry, famous for his works of science fiction.
- Automated
- Operated mainly by automatic equipment.
- Predetermined
- Established or decided in advance.
- Ethics
- Moral principles that shape a person’s behaviour or how something is done.