Driverless Cars are also Ethics-less

Original article was published by Keith Law on Artificial Intelligence on Medium

Ethical arguments regarding driverless cars require us to confront the question of whether the computational theory of mind has any merit.

As aspiring manufacturers work out the technical problems with driverless cars, many of us are questioning the ethical and legal problems. Many supporters of driverless cars falsely believe that the ethical and technical problems are one and the same, so that reducing glitches that would likely cause accidents is the only concern. They do not consider an equally fundamental problem that confronts us when we replace human beings with computer technologies in the driver's seat.

It should be pointed out in advance that the claim that driverless cars will one day reduce accidents overall by reducing human error is hasty. An equal or greater number of accidents could occur due to technological errors, such as those already happening, or worse, through deliberate hacking of computer systems. Further, not only is it nearly impossible to calculate the number of current accidents due to human error, it is impossible to calculate the number of accidents that have been averted by human acuity. Who among us hasn't witnessed a potential accident avoided by a conscientious driver? It is likely that an alert person can recognize and account for other human drivers in ways that computer programs never will.

As an internet search reveals, many reduce the ethical problems of driverless cars to thought experiments like the infamous "Trolley Problem," as if these get to the essential matter. The Trolley Problem is a dilemma in which the conductor of a trolley rushing toward multiple people standing on the track must choose between letting it hit them or diverting the trolley onto another track, thereby intentionally killing one innocent bystander.

The Trolley Problem is presented as a debate between utilitarian ethics, which would advocate turning the trolley to save the greater number of people, and those who assert that we have a duty never to intentionally kill an innocent person, which would force the conductor to run over the larger group.

Advocates argue that if they can solve ethical situations like the Trolley Problem, then driverless cars should be allowed. The problem with this thinking is that it misses a more basic condition of all ethical decision-making: the agent we want to hold responsible for actions must be capable of ethical reasoning in the first place.

The essential feature that the trolley has, and that driverless cars do not, is a human driver in possession of the mental faculties that render them capable of ethical decisions and actions. This includes understanding what it means to be responsible, and reacting ethically in various situations, not merely those programmed in advance.

The fact that ethical actions depend on specific mental faculties forces us to ask whether self-driving technologies possess the mental life upon which moral reasoning is based. Framed another way, in order for us to hold whoever or whatever is driving a car responsible for vehicular manslaughter, it follows that they must have the capacity to be responsible for their thoughts and actions while driving.

Whether we humans possess the faculties upon which ethical responsibility depends has been debated for eons, but current laws presume this capability. On the other hand, it is clear that current computer technologies do not possess it, and thus cannot be held responsible for the Trolley Problem or any other ethical decision.

We say we can "teach artificial intelligence" as if it learns, which leads some to falsely attribute mental life where there is none. The false attribution of mental life to computer programs, the computational theory of mind, was challenged decades ago by, among others, UC Berkeley philosophy professor John Searle. In his "Chinese room argument," Searle showed that no matter how successfully we can make a computer program respond to symbols, in his case responding to phrases in the Chinese language, this alone does not mean that the computer program communicates in meaningful ways.

In other words, a computer might be programmed to scan and respond to this editorial in a way that would fool us into thinking it is genuinely reading, which is the famous Turing Test. However, it is merely executing preprogrammed responses rather than understanding linguistic meaning, as do the humans who genuinely understand what they read.

Since current computer programs, no matter how technologically sophisticated, do not possess the mental life upon which understanding and meaning depend, it follows that they cannot possess the faculties that would render them ethical agents. If a computer program cannot be an ethical agent, we cannot judge it on ethical or legal terms for its actions; therefore, computer programs should not be in the driver's seat of anything that requires an ethical decision.

Maybe the best way to stop the oncoming trolley that is driverless cars has been handed to us by Uber, when it responded with an out-of-court cash settlement to the death of Elaine Herzberg in Arizona, killed by one of its test cars. In this and other cases, those in charge have failed their first ethical test by allowing trials on public roads and on unwilling people, and there have already been injuries and deaths. To put this into perspective, we don't allow pharmaceutical companies to test drugs on patients who are not willing participants.

If the manufacturers of the technology are made legally and financially responsible for every mishap caused by a driverless car, that will likely end the entire enterprise. Sadly, this will have been for economic rather than ethical reasons.