Setting The Record Straight About The Trolley Problem and Self-Driving Cars

Original article was published by Lance Eliot on Artificial Intelligence on Medium


Dr. Lance Eliot, AI Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Have you ever heard about the Trolley Problem?

It is a classic topic taught in classes about ethics and ethical dilemmas.

Turns out that the Trolley Problem is also considered one of the most controversial and outright fist-fighting topics in the field of AI autonomous self-driving cars.

If you mention the Trolley Problem to any industry insider, you’ll likely get one of two reactions. One comes from those who consider themselves in-the-know gurus: they will immediately discount the Trolley Problem as entirely hypothetical and obtuse, looking at you askance as though you have naively fallen for some kind of scam or trickery. Others might concede reluctantly that it is an interesting topic for discussion, perhaps even worth seriously pondering, but otherwise not especially relevant to any day-to-day practical matters involving self-driving cars.

I’d like to see if we can give the matter its serious consideration and proper due.

To get us all on the same page, the place to start entails clarifying what the Trolley Problem consists of.

Turns out that it is an ethically-stimulating thought experiment that traces back to the early 1900s. As such, the topic has been around for quite a while and more recently has become generally associated with the advent of self-driving cars. In brief, imagine that a trolley is going down the tracks and there is a fork up ahead. If the trolley continues in its present course, alas there is someone stuck on the tracks further along, and they will get run down and killed. You are standing next to a switch that will allow you to redirect the trolley into the forking rail track and thus avoid killing the person.

Presumably, obviously, you would invoke the switchover.

But there is a hideous twist, namely that the forked track also has someone entangled on it, and by diverting the trolley you will kill that person instead.

This is one of those no-win situations.

Whichever choice you make, a person is going to be killed.

You might be tempted to say that you do not have to make a choice and therefore you can readily sidestep the whole matter. Not really, since by doing nothing you are essentially “agreeing” to have the person on the straight-ahead path killed. You cannot seemingly avoid culpability by shrugging your shoulders and opting to do nothing; instead, you are inextricably intertwined in the situation.

Given this preliminary setup of the Trolley Problem as a lose-lose with one person at stake in either option, it does not especially spark an ethical dilemma, since each outcome is, woefully, the same: one death.

The matter is usually altered in various ways to try and see how you might respond to a more ethically challenging circumstance.

For example, suppose you can discern that the straight-ahead track has a child on it, while the forked track has an adult.

What now?

Well, you might attempt to justify using the switch to get the trolley to fork onto the track with the adult, doing so under the logic that the adult has already lived some substantive part of their life, while the child is only at the beginning of their life and perhaps ought to be given a chance for a longer existence.

How does that seem to you?

Some buy into it, some do not.

Some might argue that every person has an equal “value” of living and it is untoward to prejudge that the child should live while the adult is to die.

Some would argue that the adult should be the one that is kept alive since they have already shown that they can survive longer than the child.

Here’s another variation.

Both are adults, and the one on the forked path is Einstein.

Does this change your viewpoint about which way to direct the trolley?

Some would say that averting the trolley away from Einstein is the “right” choice, saving him and allowing him to live and inevitably offer the tremendous insights that he was destined to provide (we are assuming in this scenario that it is a younger, adult-aged Einstein).

Not so fast, some might say, wondering whether the other adult, the one on the straight-ahead track, might be destined to be equally great or perhaps make even more notable contributions to society (who’s to know?).

Anyway, I think you can see how the ethical dilemmas can be readily postulated with the Trolley Problem template.

Usually, the popular variants involve the number of people that are stuck on the tracks. For example, assume there are two people trapped on the straight-ahead path, while only one person is jammed on the forked path.

Some would say this variant has an “easy” answer, since sparing two people is presumed preferable to saving just one. In that sense, you are willing to consider that lives are somewhat additive: the more lives saved, the more ethically favorable that particular choice.

Not everyone would concur with that logic.

In any case, we now have placed on the table herein the crux of the Trolley Problem.

I realize that your initial reaction likely is that it is a mildly interesting and thought-provoking notion but seems overly abstract and does not offer any practical utility.

Some object and point out that they do not envision themselves ever coming upon a trolley and perchance finding themselves in this kind of obtuse pickle.

Shift gears.

A firefighter has rushed up to a burning building. There is a man in the building that is poking out of a window, acrid smoke billowing around him, and yelling to be saved. What should the firefighter do?

Well, of course, we would hope that the firefighter would seek to rescue the man. But, wait, there is the sound of a child, screaming uncontrollably, stuck in a bedroom inside the burning building. The firefighter has to choose which to try and rescue, and for which the firefighter will not have time to save both of them. If the firefighter chooses to save the child, the man will perish in the fire. If the firefighter chooses to save the man, the child will succumb to the fire.

Does this seem familiar?

The point is that there are potentially real-life related scenarios that exhibit the underlying parameters and the overarching premise of the Trolley Problem.

Remove the trolley from the problem as stated and look at the structure or elements that underpin the circumstances (we can still refer to the matter as the Trolley Problem for sake of reference, yet remove the trolley and still retain the core essentials).

We have this:

  • There are dire circumstances of a life-or-death nature (more like death-or-death)
  • All outcomes are horrific (even the do-nothing option) and lead to fatality
  • Time is short, with urgency and immediacy involved
  • Options are extremely limited, and a forced-choice is required
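Those structural elements can be sketched in code. The following is purely an illustrative toy — the `Option` type, the fatality counts, and the count-minimizing policy are all hypothetical constructs for discussion, not anything drawn from an actual driving system:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One available action in a Trolley-Problem-style predicament."""
    name: str
    expected_fatalities: int

def forced_choice(options: list[Option]) -> Option:
    """Every option, including 'do nothing', counts as a decision; there
    is no way to opt out. This sketch picks the option with the fewest
    expected fatalities -- one possible, and contested, policy."""
    return min(options, key=lambda o: o.expected_fatalities)

# The classic setup: doing nothing kills one person, switching kills another.
classic = [Option("do_nothing", 1), Option("throw_switch", 1)]
# The popular variant: two people straight ahead, one on the fork.
variant = [Option("do_nothing", 2), Option("throw_switch", 1)]

print(forced_choice(variant).name)  # a count-minimizing policy switches
```

Note that in the classic setup the counts tie, which is exactly why that version sparks no dilemma; the policy question only becomes interesting once the outcomes differ.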

You might try to argue that there is not a “forced choice” since there is the do-nothing option always available in these scenarios, but we are going to assume that the person faced with the predicament is aware of what is taking place and realizes they are making a choice even if they choose to do nothing.

Obviously, if the person confronted with the choice is unaware of the ramifications of doing nothing, they perhaps could be said to have not been cognizant of the fact that they tacitly made a choice. Likewise, someone that miscomprehends the situation might falsely believe that they do not have to make a choice.

Assume that the person involved is fully aware of the do-nothing and must choose to do nothing or to not do-nothing (I emphasize this due to the aspect that sometimes people mulling over the Trolley Problem will attempt to weasel out of the setup by saying that the do-nothing is the “right” choice since they then have averted making any decision; the selection of do-nothing is in fact considered a decision in this setup).

As an aside, in the case of the burning building, if the firefighter does nothing, presumably both the man and the child will die, so this is somewhat off-kilter from the Trolley Problem as presented; thus, it is perhaps more evident that the firefighter will almost certainly make a choice. It differs from the classic Trolley Problem in that the firefighter can always, later on, point out that doing nothing was certainly worse than making a choice, no matter which choice was ultimately selected.

One other point, this is not particularly a so-called Hobson’s choice scenario, which sometimes is misleadingly likened to the Trolley Problem.

Hobson’s choice is based on a historic story of a horse owner who told those wanting a horse that they could choose either the horse closest to the barn door or take no horse at all. As such, the upside is taking the horse as proffered, while the downside is that you end up without a horse. This is a decision-making scenario of a take-it-or-leave-it style, and decidedly not the same as the Trolley Problem.

With all of the background setting the stage, we can next consider how this seems to be an issue related to self-driving cars.

The focus will be on AI-based true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

The AI is doing the driving.

Here’s the vexing question: Will the AI of true self-driving cars have to make Trolley Problem decisions during the act of driving the self-driving vehicle?

The reaction by some insiders is that this is a preposterous idea and utterly miscast, labeling the whole matter as a falsehood and something that has no bearing on self-driving cars.

Really?

Start with the first premise that is usually given, which is that there is no such thing as a Trolley Problem in the act of driving a car.

Anyone trying to use the “never happens” argument (for nearly anything) finds themselves on rather shaky and porous ground, since all it takes is a single counterexample to prove that the “never” is an incorrect statement.

I can easily provide that existence proof.

Peruse the news about car crashes and you will find examples. Here’s a recent news headline: “Driver who hit pedestrians on sidewalk was veering to avoid crash.”

The real-world reporting indicated that a driver was confronted with a pick-up truck that unexpectedly pulled in front of him, and he found himself having to choose whether to ram into the other vehicle or to try and veer away from the vehicle, though he also realized apparently that there were nearby pedestrians and his veering would take him into the pedestrians.

Which to choose?

I trust that you can see that this is very much like the Trolley Problem.

If he opted to do nothing, he was presumably going to ram into the other vehicle. If he veered away, he was presumably going to potentially hit the pedestrians. Either choice is certainly terrible, yet a choice had to be made.

Some of you might bellow that this is not a life-or-death choice, and indeed, fortunately, the pedestrians, though injured, were not actually killed (at least as stated in the reporting), but I think you are fighting a bit hard to try and reject the Trolley Problem.

It can be readily argued that death was on the line.

Anyone with an open mind would agree that there was a horrific choice to be made, involving dire circumstances, limited options, and a time urgency factor, otherwise conforming to the Trolley Problem overall (minus the trolley).

As such, for those in the “never happens” camp, this is one example, of many, for which the word never is blatantly wrong.

It does happen.

It is an interesting matter to try and gauge how often this kind of decision-making takes place while driving a car. In the United States alone, there are about 3.2 trillion miles driven each year by roughly 225 million licensed drivers, and the result is approximately 40,000 deaths and 2.3 million injuries due to car crashes annually.
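For a sense of scale, the arithmetic behind those figures is simple enough to verify; the inputs are just the approximate U.S. statistics quoted above:

```python
miles_per_year = 3.2e12     # total U.S. vehicle miles traveled annually (approx.)
deaths_per_year = 40_000    # approximate annual crash fatalities
drivers = 225e6             # approximate licensed drivers

# Fatalities per 100 million miles driven, a standard roadway-safety metric.
deaths_per_100m_miles = deaths_per_year / (miles_per_year / 100e6)
print(round(deaths_per_100m_miles, 2))  # 1.25

# Average miles driven per licensed driver per year.
print(round(miles_per_year / drivers))  # 14222
```

That roughly 1.25 fatalities per 100 million miles is the baseline any count of Trolley-Problem-style crashes would have to be measured against, though, as noted next, we have no direct tally of how many crashes fit the pattern.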

We do not know how many of those crashes involved a Trolley Problem scenario, but we do know that reportedly it does occur (as evidenced by news reporting).

On that aspect of reporting, it is quite interesting that, apparently, we should be cautious in interpreting any of the stories and coverage of car crashes, due to a suggested bias in such reporting.

A study discussed in the Columbia Journalism Review points out that oftentimes the driver is quoted by news reporters, rather than the victims who were harmed by the driving act (this is logically explainable, since the victims are either hard to reach as they are at a hospital and possibly incapacitated, or, sadly, they are dead and thus unable to explain what happened).

You might recognize this kind of selective attention as survivorship bias, a type of everyday bias in which we tend to focus on that which is more readily available and neglect or underplay that which is less available or apparent.

For the driving of a car and the reporting of car crashes, we need to be mindful of this facet.

It could be that there are instances involving the Trolley Problem that the surviving participants might not realize had occurred, or are reluctant to state as such, and so on. In that sense, it could be that the Trolley Problem in car crashes is underreported.

Being fair, we can also question the veracity of those that make a claim that amounts to a Trolley Problem and be cautious in assuming that just because someone says it was, it might not have been. In that sense, we could be mindful of potential overreporting.

All in all, though, we can reasonably reject the claim that the Trolley Problem does not exist in the act of driving a car. Stated more affirmatively, we can reasonably accept and acknowledge that the Trolley Problem does exist in the act of driving a car.

There, I said it, and I’m sure some pundits are boiling mad.

Self-Driving Cars And Dealing With The Trolley Problem

Anyway, with that under our belt, we hopefully might agree that human drivers can and do face the Trolley Problem.

But is it only human drivers that experience this?

One can assert that an AI-based driving system, which is supposed to drive a car and do so to the same or better capability than human drivers, could very well encounter Trolley Problem situations.

Let’s tackle this carefully.

First, notice that this does not suggest that only AI driving systems will encounter a Trolley Problem, which is a confusion that sometimes arises.

Some claim the Trolley Problem will only happen to self-driving cars, but it hopefully is clear-cut that this is something that faces human drivers, and we are extending that known facet to what we assume self-driving cars will encounter too.

Second, some argue that we will have only and exclusively AI-based true self-driving cars on our roadways, and as such, those vehicles will communicate and coordinate electronically via V2X, doing so in a fashion that will obviate any chance of a Trolley Problem arising.

Maybe so, but that is a Utopian-like future that we do not know will happen, and meanwhile, there is inarguably going to be a mixture of both human-driven cars and AI-driven cars, for likely a long time to come, at least decades, and we also do not know if people will ever give up their perceived “right” to drive (it’s actually a privilege).

This is an important point that many never-Trolley proponents overlook.

Here’s how they get themselves into a corner.

The oft-heard refrain is that an AI-based self-driving car has “obviously” been poorly engineered, or that the AI developers did a lousy job, if the vehicle ever perchance finds itself amid a Trolley Problem.

Usually, these same claims are also associated with the belief that we will have zero fatalities as a result of self-driving cars.

As I have exhorted many times, zero fatalities is a zero chance.

It is a lofty goal, and a heartwarming aspiration, but nonetheless a misleading and outright false establishment of expectations.

The rub is that if a pedestrian darts into the street, and there was no forewarning of the action, and meanwhile a self-driving car is coming down the street at perhaps 35 miles per hour, the physics of stopping in time cannot be overcome simply because the AI is driving the car.
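To put rough numbers on that physics claim, here is a textbook stopping-distance calculation. The reaction delay and friction coefficient are assumed illustrative values, not the specs of any actual vehicle or sensor suite:

```python
def stopping_distance_m(speed_mph: float,
                        reaction_s: float = 0.5,
                        friction: float = 0.7,
                        g: float = 9.81) -> float:
    """Reaction distance plus braking distance v^2 / (2 * mu * g)."""
    v = speed_mph * 0.44704           # mph -> m/s
    reaction_dist = v * reaction_s    # distance covered before braking begins
    braking_dist = v * v / (2 * friction * g)
    return reaction_dist + braking_dist

# At 35 mph, even granting a fast half-second reaction (human or machine),
# the car needs roughly 25 meters to stop; a pedestrian appearing closer
# than that cannot be avoided by braking alone, regardless of who drives.
print(round(stopping_distance_m(35), 1))  # 25.6
```

The exact figure shifts with road surface and detection latency, but no plausible parameter choice gets the braking distance anywhere near zero, which is the crux of why zero fatalities is an unattainable expectation.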

The usual retort is that the AI would have always detected the pedestrian beforehand, but this is a falsehood that implies the sensors will always and perfectly be able to detect such matters, and that it will always be done sufficiently in advance that the self-driving car can avoid the pedestrian.

I dare say that a child that runs out from between two parked cars is not going to offer such a chance.

We are once again into the existence proof, meaning that there are going to be circumstances whereby no matter how good the AI is, and how good the sensors are, there will still be instances of the AI not being able to avoid a car crash.

Likewise, one can argue in that same vein that the Trolley Problem will be indeed encountered by AI self-driving cars, ones that are on our public streets, and traveling amongst human drivers, and driving near to human pedestrians.

The news report about the human driver that was cut off by a pick-up truck could absolutely happen to a self-driving car.

This seems undebatable.

If you are now of the mind that the Trolley Problem can occur and can occur too in the case of AI self-driving cars, the next aspect is what will the AI do.

Suppose the AI jams on the brakes, and slams head-on into that pick-up truck.

Did the AI consider other options?

Was the AI even considering veering to the side of the road and up onto the sidewalk (and, into the pedestrians)?

If you are a self-driving carmaker or automaker, you need to be very, very, very careful about what your answer is going to be.

I’ll tell you why.

You might say that the AI was only programmed to do whatever was the obvious thing to do, which was to apply the brakes and attempt to slow down.

We can likely assume that the AI was proficient enough to calculate that despite the braking, it was going to ram into the pick-up truck.

So, it “knew” that a car crash was imminent.

But if you are also saying that the AI did not consider other options, including going up onto the sidewalk, this certainly seems to showcase that the AI was doing an inadequate job of driving the car, and we would have expected a human driver to try and assess alternatives to avoid the car crash.

In that sense, the AI is presumably deficient and perhaps should not be on our public roadways.

You are also opening wide your legal liability, which I have repeatedly stated is something that will ultimately be a huge exposure for the automakers and self-driving carmakers. Once self-driving cars are prevalent, and once they get into car crashes, which they will, the lawsuits are going to come flying, and there are lawyers already priming to go after those deep-pocketed billion-dollar funded makers of self-driving tech and self-driving cars.

Meanwhile, some of you might say that the AI did consider other alternatives, defending the robustness of your AI system, including that it considered going up on the sidewalk, but it then calculated that the pedestrians might be struck and so opted to stay the course and rammed instead into the pick-up truck.

Whoa, you have just admitted that the AI was entangled into a Trolley Problem scenario.

Welcome to the fold.

Conclusion

When a human driver confronts a Trolley Problem, they presumably take into account their potential death or injury, which thus differs from the classic Trolley Problem since the person throwing the switch for the trolley tracks is not directly imperiled (they might suffer emotional consequences, or maybe even legal repercussions, but not bodily harm).

We can reasonably assume that the AI of a self-driving car is not concerned about its well-being (I don’t want to detract from this herein discussion and take us onto a tangent, but some argue we might someday ascribe human rights to AI).

In any case, the self-driving car might have passengers in it, which introduces a third element of consideration for the Trolley Problem.

This is akin to adding a third track and another fork.

The complications, though, extend somewhat beyond the traditional Trolley Problem, since the AI must now take into account joint probabilities and levels of uncertainty. In the case of the pick-up truck, one option carries the possible death or injury of the pick-up driver and the self-driving car passengers, while the other carries the possible death or injury of the pedestrians and the self-driving car passengers.
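That joint-probability framing can be pictured as a toy expected-harm comparison. The option names and probability numbers below are entirely made up for illustration, and summing harm probabilities is a deliberately crude model, not a proposed ethical policy:

```python
# Each option maps affected parties to an assumed probability of
# serious harm; every number here is purely illustrative.
options = {
    "brake_and_ram_truck": {
        "pickup_driver": 0.3,
        "own_passengers": 0.4,
    },
    "veer_onto_sidewalk": {
        "pedestrians": 0.6,
        "own_passengers": 0.2,
    },
}

def expected_harm(outcomes: dict) -> float:
    """Sum of per-party harm probabilities (a crude linear model)."""
    return sum(outcomes.values())

for name, outcomes in options.items():
    print(name, expected_harm(outcomes))
# Minimizing this sum is only one of many contested policies; the point
# is that the AI's choice necessarily encodes *some* such weighting.
```

Whether parties should be weighted equally, whether passengers count differently from bystanders, and how uncertainty should be handled are precisely the ethical questions the Trolley Problem forces out into the open.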

Maybe that is the Trolley Problem on steroids.

Time for a wrap-up.

For those flat earthers that deny the existence of the Trolley Problem in the case of AI-based true self-driving cars, your head-in-the-sand perspective is not only myopic but you are going to be the easiest of the legal targets for lawsuits.

Why so?

Because it was a well-known and oft-discussed matter that the Trolley Problem exists, yet you did nothing about it and hid behind the assertion that it does not exist.

Good luck with that.

For those of you that are the rare earthers, you acknowledge that the Trolley Problem exists for self-driving cars, but argue that it is a rarity, an edge case, a corner case.

Tell that to the people killed when your AI-based true self-driving car hits someone, doing so in that “rare” instance that will indisputably eventually arise.

Again, it is not going to hold any legal water.

Then there are the get-round-to-it earthers that acknowledge the Trolley Problem, lament that they are so busy right now that it is low on the priority list, and pledge that one day, when time permits, they will deal with it.

There is little difference between the rare earthers and the get-round-to-it earthers, and either way, they are going to have quite some explaining to do to a jury and a judge when the time comes.

Here’s what the automakers and self-driving tech firms should be doing:

  • Develop a sensible and explicit strategy about the Trolley Problem
  • Craft a viable plan that entails the development of AI to cope with the Trolley Problem
  • Undertake appropriate testing of the AI to ascertain the Trolley Problem handling
  • Roll out the AI capabilities when ready, and monitor their usage
  • Adjust and enhance the AI as feasible to increasingly improve Trolley Problem handling

Hopefully, this discussion will awaken the flat earthers, and nudge forward the rare earthers and the get-round-to-it earthers, urging them to put proper and appropriate attention to the Trolley Problem and sufficiently preparing their AI driving systems to cope with these life-or-death matters.

It is a real problem with real consequences.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2020 Dr. Lance B. Eliot