Original article was published by NuAIg.ai on Artificial Intelligence on Medium
Why self-driving cars must be programmed to kill
Suppose that in the near future you are riding in a driverless car when something suddenly sends it hurtling toward 10 pedestrians crossing the road. The vehicle cannot brake in time, but it could swerve into a wall beside it and avoid hitting the 10 pedestrians. Crashing into the wall, however, may cost you, the owner and passenger, your life. What should the car do?
When it comes to automotive technology, the first thing that comes to mind is the self-driving car. Many ordinary cars already ship with semi-autonomous features, including intelligent cruise control, parallel-parking programs, and even automatic overtaking. With these features you can lean back and let the computer drive, though you may feel a little uneasy.
Many car manufacturers have therefore begun developing fully autonomous cars. Such cars promise to be safer, cleaner, and more fuel-efficient than human-driven ones. But they cannot be absolutely safe.
This raises some harder questions. When an accident is unavoidable, how should an autonomous vehicle be programmed to react? Should it minimize the loss of life, even if that means sacrificing its passengers? Should it protect the people inside at all costs? Or should it choose randomly between these two extremes?
The answers to these ethical questions matter because they will strongly influence how society comes to accept autonomous vehicles. After all, who would buy a car programmed to sacrifice its owner?
So can science and technology help resolve this dilemma? Jean-Francois Bonnefon of the Toulouse School of Economics in France and his colleagues set out to find an answer. They argue that while there may be no right or wrong answer to these questions, public opinion will determine how, or even whether, autonomous vehicles come to be widely accepted.
They therefore turned to the emerging methods of experimental ethics to gauge public opinion. This involves posing ethical dilemmas to a large number of people and seeing how they answer. The results of the survey are interesting, if predictable. "Our research addresses for the first time the problems raised by the ethical algorithms of autonomous vehicles," they said.
Consider the dilemma again: suppose you own a self-driving car in the near future. One day, while you are riding in it, something sends the car hurtling toward 10 pedestrians crossing the road. It cannot brake in time, but it could swerve into a wall beside it, sparing the pedestrians at the likely cost of your own life as the owner and passenger. What should it do?
One solution is that autonomous vehicles should always try to minimize casualties. By this reasoning, killing one person is better than killing 10.
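To make the utilitarian rule concrete, here is a minimal sketch in Python. It is purely illustrative, not a real vehicle controller: the function name, the maneuver labels, and the casualty numbers are all assumptions chosen to mirror the dilemma above, where each candidate maneuver is assigned an expected number of deaths and the car picks the maneuver with the fewest.

```python
def choose_maneuver(options):
    """Utilitarian rule: return the maneuver with the fewest expected deaths.

    `options` maps a maneuver name to the expected number of casualties
    (passengers plus pedestrians) if that maneuver is taken.
    """
    # min() over the keys, ranked by their expected casualty count
    return min(options, key=options.get)


# The dilemma from the article: continue straight (10 pedestrians die)
# or swerve into the wall (the single passenger dies).
dilemma = {"continue_straight": 10, "swerve_into_wall": 1}
print(choose_maneuver(dilemma))  # -> swerve_into_wall
```

The sketch also exposes what the article goes on to question: a rule this simple weighs only the body count, with no notion of who is inside the car, uncertainty about outcomes, or responsibility for the choice.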
However, this approach may have other consequences. Few people would buy a self-driving car programmed to sacrifice its owner in an accident. Road accidents would then continue as before, and more people might die overall, because ordinary vehicles are involved in far more accidents. The result is a dilemma.
Bonnefon and his colleagues hoped public opinion would point toward an answer to this ethical dilemma. They reasoned that people are more likely to accept designs that match their own views.
So they posed these ethical dilemmas to several hundred workers on Amazon Mechanical Turk to gauge their thinking. Respondents faced hypothetical scenarios in which a car could swerve into a barrier, killing its passenger or a pedestrian, in order to save the lives of one or more other pedestrians.
The researchers also varied some of the details, such as how many pedestrians could be saved, whether the decision to swerve was made by the car's computer or by a driver, and whether participants imagined themselves as the passenger or as an anonymous bystander.
The results are interesting, if predictable. In general, people agreed that autonomous vehicles should be designed to minimize the death toll.
This utilitarian stance is commendable, but participants were only willing to go so far. Bonnefon and his colleagues conclude: "[Participants] were not as confident that autonomous vehicles would really be designed that way in practice. In other words, they would like other people to ride in utilitarian self-driving cars, but would not buy one themselves."
This points to a subtler issue. People approve of cars that sacrifice their passengers to save others, as long as they do not have to ride in one themselves.
Bonnefon and his colleagues note that their work is only a first step, and that more complex moral problems remain unexplored. Questions of uncertainty and blame still need to be addressed.
These, Bonnefon and his colleagues say, raise even bigger questions: "If the passenger of a car is more likely to survive than a motorcycle rider, is it acceptable for the self-driving car to swerve into a wall to avoid the motorcycle? Should different decisions be made when children are in the car, since, compared to adults, they have longer to live? If car manufacturers offer different versions of the ethical algorithm and a buyer knowingly chooses one of them, should the buyer be responsible for the consequences of the computer's decisions?"
These issues cannot be ignored. As the research team put it: "As we empower millions of cars to drive autonomously, taking the ethics of algorithms seriously becomes extremely urgent."
Ref: arxiv.org/abs/1510.03346 : Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?