
Driverless cars – the moral maze

Dr Edmond Awad discusses the moral and ethical issues of driverless cars and what it means for the development of AI.

10 May 2021

The end of humans as car drivers?

The announcement last week that driverless cars could be allowed on UK motorways by the end of 2021 is another step towards AI-powered vehicles and the gradual reduction of human input when travelling by car.

“10 million self-driving cars will be on the road by 2020,” announced a Business Insider article back in 2016.

However, the reality in 2021 is that there is still a long way to go before we can sit back and let the car do the hard work. In the UK, the move is likely to be confined to automated lane-keeping systems and allowed only when traffic is moving slowly.

Part of the issue is that the necessary technology is not yet fully developed. Part of it is our reluctance, after a century of driving, to hand over control to a computer.

Simply put, can you entrust the safety of your loved ones, yourself and other road users to Artificial Intelligence (AI)?

Cars can be lethal, and the question of what a driverless car will do in a split-second, life-or-death decision is fundamental.

The Moral Machine?

This fundamental question informed our research and the development of the Moral Machine website.

The website uses gamification to crowdsource people’s decisions based on the trolley problem (the thought experiment used in ethics and psychology on whether to sacrifice one person to save a larger number). To date, millions of people in every country in the world have logged over 40 million decisions via the website, making it one of the largest studies ever done on global moral preferences.

Our objective was to understand people’s decisions on how driverless cars could prioritise different lives in the event of a collision. We tested nine different attributes – should a driverless car prioritise (an illustrative encoding of these dimensions follows the list):

  • humans over pets
  • passengers over pedestrians
  • more lives over fewer
  • women over men
  • young over old
  • fit over large
  • higher social status over lower
  • law-abiders over law-benders
  • taking action (swerving) over staying on course (inaction)?
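
To make these dimensions concrete, here is a minimal sketch of how a Moral Machine-style dilemma and a respondent’s choice could be represented in code. Everything here is a hypothetical illustration of the structure described above, not the website’s actual data model.

```python
from dataclasses import dataclass

# Hypothetical encoding of one Moral Machine-style dilemma. Each outcome
# lists the characters who would be killed; a respondent chooses which
# group the car should spare. Field names mirror the nine dimensions
# above and are illustrative, not the site's real schema.

@dataclass
class Character:
    species: str   # "human" or "pet"
    role: str      # "passenger" or "pedestrian"
    gender: str    # "female", "male", or "n/a"
    age: str       # "young", "adult", or "elderly"
    fitness: str   # "fit", "large", or "n/a"
    status: str    # "high", "low", or "n/a"
    lawful: bool   # crossing with the light?

@dataclass
class Dilemma:
    stay_course: list[Character]  # killed if the car does nothing
    swerve: list[Character]       # killed if the car takes action

@dataclass
class Response:
    dilemma: Dilemma
    country: str
    chose_swerve: bool  # True: the respondent had the car take action
```

Keeping all nine attributes on each character means any of the comparisons above – humans versus pets, law-abiders versus law-benders, action versus inaction – can be read off the same record.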
[Figure: Moral Machine diagram – image from https://www.moralmachine.net/]

The results show that moral decisions vary considerably across countries. For example, people in countries with weaker institutions are more tolerant of jaywalkers, relative to pedestrians who cross legally, than people in countries with stronger institutions. In countries with high levels of economic inequality, people show larger gaps between the treatment of individuals with high and low social status.

However, there were some common themes. People tended to prioritise three of the nine attributes (a simplified sketch of how such preference rates could be tallied follows this list):

  • sparing humans over pets
  • sparing more lives over fewer lives
  • sparing younger humans over older humans.
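
Purely as an illustration of how such shares could be computed, here is a small Python sketch that tallies, per country, the fraction of responses sparing one side of an attribute. The function name, attribute labels and sample data are all hypothetical, and the published study estimates preferences with conjoint-analysis techniques rather than raw proportions, so treat this as a toy stand-in.

```python
from collections import defaultdict

def preference_rates(responses):
    """Map (country, attribute) to the share of responses that spared
    the first-named side of that attribute."""
    counts = defaultdict(lambda: [0, 0])  # (country, attr) -> [spared, total]
    for country, attribute, spared in responses:
        bucket = counts[(country, attribute)]
        bucket[1] += 1
        if spared:
            bucket[0] += 1
    return {key: spared / total for key, (spared, total) in counts.items()}

# Hypothetical sample: two made-up countries compared on one attribute.
sample = [
    ("A", "lawful_over_jaywalker", True),
    ("A", "lawful_over_jaywalker", True),
    ("A", "lawful_over_jaywalker", False),
    ("B", "lawful_over_jaywalker", True),
]

rates = preference_rates(sample)
print(rates[("A", "lawful_over_jaywalker")])  # ~0.67
print(rates[("B", "lawful_over_jaywalker")])  # 1.0
```

A gap between the two rates is the kind of cross-country difference described above, such as the stronger tolerance of jaywalkers in countries with weaker institutions.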

How do we act on these results?

First, we are very clear that the results are not simply a guide to motor manufacturers on what decisions their computers and AI systems should follow. The results show people’s views and prejudices. Just because people report certain preferences doesn’t mean those preferences make for wise or fair policy. As humans we are inherently biased, and some of the preferences are worrying, such as the relatively strong preference to spare a higher-status person at the cost of a lower-status person.

Second, the results do serve to illustrate the problem that there is no one overriding moral template that all driverless cars can follow. The programming of a driverless car in China may well differ from that in the United States, or between the UK and France. This presents a whole conundrum of potential problems – imagine taking your car from one country to another: would it be licensed? What about the issue of insurance? Who would really be liable in the event of a crash?

Third, this leads to the discussion of whether we need to develop regional or even global standards for AI. Driverless cars are just one example of AI, which will become ever more pervasive in our home, social and professional lives. AI can’t be kept neatly within borders, and there is a need for a serious discussion of the potential opportunities and challenges at a global level.

Fourth, the research shows that there is a need for greater transparency around AI. We need to consider fully the moral and ethical dimensions, exploring further how all of us will react to the ethics of different design and policy decisions in AI applications. AI, after all, can heavily reflect the biases of the people developing it. With something as potentially dangerous as cars, we need to be open about how the technology is designed, and consider what regulations are needed to guide its safe and responsible development.

The future?

There is no doubt that AI is an exciting development, and from a technical perspective there is something amazing about a driverless car. However, the moral dimensions need far more focus if we are to truly embrace this new technology and see the day when we can safely sit behind the wheel and be taken for a drive.


Photo by Jakub Gorajek on Unsplash


Author

Dr Edmond Awad is a Lecturer at the University of Exeter Business School (Department of Economics) and the Institute for Data Science and Artificial Intelligence.
