Monday, November 13, 2017

It’s Time To Pull The Switch On The Trolley Problem

Back in August, Germany became the first nation to institute federal guidelines for self-driving cars. These guidelines include criteria for what to do in the case of an impending accident, when split-second decisions have to be made. Built into these criteria is a set of robust moral values, including a mandate that self-driving cars prioritize human lives over the lives of animals or property, and that the cars are not allowed to discriminate between humans on the basis of age, gender, race, or disability.

Philosophers have an obvious interest in these sorts of laws and the moral values implicit in them. Yet in spite of the wide range of potentially interesting problems such technology and legislation pose, one perennial topic seems to dominate the discussion, both amongst philosophers and in the popular press: the Trolley Problem. So electrifying has this particular problem become that I suspect I don’t need to rehash the details, but just in case, here is the basic scenario: a runaway trolley is hurtling down the tracks towards five people. You can save the five by throwing a switch that diverts the trolley to a side track, where there is only one person. Should you throw the switch, saving the five and sacrificing the one, or should you do nothing, letting the one live and letting the five die?

Initially developed fifty years ago by Philippa Foot and modified dozens of times in the intervening decades, the Trolley Problem has long been a staple of intro to ethics courses, good for kick-starting some reflection and conversation on the value of life and the nature of doing vs. allowing. Hence, you could practically feel philosophy departments all over the world jump for joy when they realized this abstract thought experiment had finally manifested itself in concrete, practical terms with the advent of self-driving cars.

This excitement fused with some genuinely fascinating work in the neuroscience of moral decision making. The work of scholars like Joshua Greene has provided genuine insight into what occurs in our brains when we have to make decisions in trolley-like situations. Out of the marriage of these two developments—along with some midwifery from psychology and economics—the field of ‘trolleyology’ was born. And it is my sincere hope that we can kill this nascent field in its crib.

Why should I, as an ethicist, have such a morbid wish for something that is clearly a boon to my discipline? Because, despite its superficial appeal, there is really not very much to it as a practical problem. It is marginally useful for eliciting conflicting (perhaps even logically incompatible) intuitions about how the value of life relates to human action, which is what makes it a useful tool for the aforementioned intro to ethics courses. But the Trolley Problem does precious little to illuminate actual moral decision making, regardless of whether you’re in a lecture hall or an fMRI scanner.

To see this, take a brief moment to reflect on your own life. How many times have you ever had to decide between the lives of a few and the lives of the many? For that matter, take ‘life’ out of the equation: how many times have you had to make a singular, binary decision weighing the significant interests of one person against the similar interests of multiple people? Actual human beings face real-world moral decisions every day, from the food choices we make and the products we purchase, to the ways we raise our children and how we respond to the needs of strangers. Almost none of these decisions share the forced, binary, clear 1-vs.-5 structure of a trolley problem.[1]

What then of the self-driving car example I opened with? Does this not demonstrate the pragmatic value of fretting over the Trolley Problem? Won’t the ‘right’ answer to the Trolley Problem be crucial for the moral operation of billions of self-driving cars in the years to come? In short, no. Despite all the press it has gotten, there is no good reason to think the development of self-driving cars requires us to solve the Trolley Problem any more than the development of actual trolleys required it almost 200 years ago. Again, check your own experience: how often behind the wheel of a car did you—or, for that matter, anyone you know, have met, or have even read about—ever have to decide between veering left and killing one or veering right and killing five? If humans don’t encounter this problem when driving, why presume that machines will?

In fact, there’s very good reason to think self-driving cars will be far less likely to encounter this problem than humans have been. Self-driving cars have sensors that are vastly superior to human eyes—they encompass a 360-degree view of the car, never blink, tire, or get distracted, and can penetrate some obstacles that are opaque to the human eye. Self-driving cars can also be networked with each other, meaning that what one car sees can be relayed to other cars in the area, vastly improving situational awareness. In the rare instances where a blind spot occurs, the self-driving car will be far more cognizant of the limitation and can take precautionary measures much more reliably than a human driver. Moreover, since accidents will be much rarer when humans are no longer behind the wheel, much of the safety apparatus that currently exists in cars can be retooled with a mind to avoiding situations where this kind of fatal trade-off occurs.[2]

Both human beings and autonomous machines face an array of serious, perplexing, and difficult moral problems.[3] Few of them have the click-bait-friendly sex appeal of the Trolley Problem. It should be the responsibility of philosophers, psychologists, neuroscientists, A.I. researchers, and journalists to engage the public on how we ought to address those problems. But it is very hard to do that when trolleyology is steering our attention in the wrong direction.

Garret Merriam
Department of Philosophy
Sacramento State

[1] There are noteworthy exceptions, of course. During World War II, Winston Churchill learned of an impending attack on the town of Coventry and decided not to warn the populace, for fear of tipping off the Germans that their Enigma code had been cracked by the British. 176 people died in the bombing, but the tactical value of preserving access to German communications undoubtedly saved many more by helping the Allies to win the war. If you’re like most people, you can be thankful that you will never have to make a decision like this one.

[2] For example, much of the weight of the car comes from the steel body necessary to keep the passengers safe in the event of a pileup or a rollover. As the likelihood of those kinds of accidents becomes statistically insignificant, this weight can largely be removed, lowering the inertia of the car and making it easier to stop quickly (and more fuel-efficient, to boot), thus avoiding the necessity of trolley-type decisions.

[3] Take, for example, the ethics of autonomous drone warfare. Removing human command and control of drones and replacing it with machine intelligence might vastly reduce collateral damage as well as PTSD in drone pilots. At the same time, however, it further lowers valuable inhibitions against the use of lethal force, and potentially creates a weapon that oppressive regimes—human-controlled or otherwise—might use indiscriminately against civilian populations. Yet a Google search for “autonomous military drones” yields a mere 6,410 hits, while “autonomous car” + “trolley problem” yields 53,500.

11 comments:

  1. Garrett, thanks very much for this interesting post.

    I am inclined to the same conclusions you draw, but for different reasons. You're right that we never encounter a trolley problem while driving ourselves, but that is partly because we lack the perceptual, epistemic and logical capabilities of autonomous vehicles. In other words, the problem actually does arise, but we are typically unaware of it and would be unable to convert the facts into rational action if we were. You're also right that, in virtue of those capabilities, autonomous vehicles will make the problem's objective occurrence far less frequent. But, in the end, it is still a problem that will come up, and we don't want cars freeze-framing on us if it does.

    My main worry here would be allowing the trolley problem to delay the age of autonomous vehicles by even one minute. Even if the trolley problem were quite a bit more frequent than you say, any solution to it would be vastly preferable to having even the most conservative, safety-conscious humans behind the wheel.

    My other concern is that there is no objectively correct answer to the (fat man) trolley problem. The only value of such dilemmas is to show where moral questions cease to have answers that we mostly agree on, and why (viz., that we have no generally accepted measurement procedure for moral value). Consequently, all car manufacturers can do is capture some kind of "morally robust" result, which in reality is a lowest-common-denominator intuition. Germany is a nice case in point. If I were designing these cars, I'd prefer a codger like me to be killed before one young person is. But apparently ordinary people in a country top-heavy with old people don't think like that.

    1. Hi Randy,

      I don't have a problem with programmers contemplating the possibility and programming a response. I think it will be almost non-existent, but yeah, given enough time it probably will happen. And since philosophers will debate anything, we can debate whether they come up with the 'right' answer.

      My problem is with all the 'think-pieces' and general attention it's getting. It distracts from other concerns that are much more pressing.

  2. Good post, Garrett. Lots here that I agree with. A couple of ideas. Currently, about 32,000 people die in car wrecks in the U.S. per year, 1.3 million worldwide. A lot of estimates concerning automated cars are that they could reduce that by 90%. Let me repeat that: 90 freaking percent. So even if, for whatever reason, automated cars start encountering trolley-style problems at some shockingly high rate compared to the past, and they make the "wrong" decision and kill one instead of 5, or whatever, the net gain in lives saved will be staggering. (We could do some math to figure out the threshold for trolley-problem frequency where it equals the other lives saved by automated cars.) So yeah, they are really rare--rare enough to almost not matter at all in this policy question. I, for one, would much rather take my chances of getting killed by an errant, wrong-Trolley-deciding autonomous car than the risk I'm currently running from the texting ape speeding, tailgating, and road-raging next to me on the highway.
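
    As a rough back-of-the-envelope sketch of the threshold math gestured at above (the 32,000-deaths and 90% figures come from this comment itself; the four-lives-per-wrong-decision cost is purely an illustrative assumption):

    ```python
    # Back-of-the-envelope threshold estimate. The US fatality figure and the
    # 90% reduction come from the comment above; the net cost per "wrong"
    # trolley-style decision is an illustrative assumption.
    us_deaths_per_year = 32_000
    reduction = 0.90
    lives_saved = us_deaths_per_year * reduction          # ~28,800 lives per year

    # Assume each "wrong" decision costs a net of 4 extra lives (five killed
    # instead of one). How many such decisions per year would cancel the gain?
    net_cost_per_wrong_decision = 4
    break_even = lives_saved / net_cost_per_wrong_decision
    print(f"~{break_even:,.0f} wrong trolley decisions per year to break even")
    # => ~7,200 per year, i.e. roughly 20 every day, before automation stops
    #    being a net win on lives saved.
    ```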

    1. It's one thing to see people argue we should take the short end of the stick when the long end is freely available. It's even worse to see them say we shouldn't take either end of the stick at all because they think it is cursed by demons. That's a bit of an exaggeration of the issue at hand, but not by much.

  3. I agree, and it’s also worth pointing out that if you read between the lines, to the AI community the trolley problem is just a marketing problem packaged as an ethical one. Ultimately they just want to push autonomous vehicles with decision rules that are acceptable to the market. AI researchers are happy to program a utilitarian AI for you, and some studies have actually found that people in general think AVs should be utilitarian, on the assumption that utility is just a monotonically decreasing function of the number of casualties.

    So as far as they’re concerned, there really isn’t an ethical dilemma - these scenarios, as you pointed out, are too rare or non-existent to impact policy making, and when one does happen the AI can resolve it in a utilitarian way that is more or less justifiable to the public. The problem is that they found out that people’s moral decisions do not coincide with their purchasing decisions - they found that people want utilitarian AVs, except they wouldn’t buy one themselves, especially if you tell them that the AI is allowed to sacrifice its passenger for the greater good. This makes sense - people don’t want to buy a car that is willing to sacrifice them, but they will happily let other people be sacrificed by AVs for the greater good.

    Unless we push for heavy regulations, the automakers will get into a weird prisoner’s dilemma situation: if they all make utilitarian AVs, then overall things will be better for everyone, but a car that will never sacrifice its owner will make a lot more money. It really is in this context that the AI community is interested in the trolley problem - they are trying to find a non-utilitarian way out of this mess. Unsurprisingly, philosophers are pretty useless in this respect, so what they are doing now is using the trolley problem to (1) generate public interest and gauge public opinion, and (2) collect data on people’s judgment on these issues, with the hope that some sort of solution or consensus will emerge.
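
    A toy sketch of that prisoner’s dilemma structure; the two strategies come from the scenario above, but the payoff numbers are invented purely to show why the self-protective design dominates in the absence of regulation:

    ```python
    # Toy payoff matrix for the prisoner's-dilemma structure described above.
    # Two automakers each choose to sell "utilitarian" AVs (minimize total harm)
    # or "self-protective" AVs (always favor their own passenger). The profit
    # numbers are invented purely to illustrate the structure.
    payoffs = {
        ("utilitarian",     "utilitarian"):     (3, 3),  # best collective outcome
        ("utilitarian",     "self-protective"): (1, 4),  # B's cars sell better
        ("self-protective", "utilitarian"):     (4, 1),  # A's cars sell better
        ("self-protective", "self-protective"): (2, 2),  # worse for everyone than (3, 3)
    }

    # Whatever the other maker does, "self-protective" earns more for you, so
    # without regulation both makers converge on the collectively worse outcome.
    for b_choice in ("utilitarian", "self-protective"):
        a_util = payoffs[("utilitarian", b_choice)][0]
        a_self = payoffs[("self-protective", b_choice)][0]
        print(f"If B sells {b_choice}: A earns {a_util} (utilitarian) vs {a_self} (self-protective)")
    ```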

    1. Hi Lok,

      Your alternative take on the ethics here is certainly more interesting (if more subterranean) than the mainstream one I was considering. Nonetheless, I'm somewhat skeptical the problem you propose will matter much, either.

      For one, there are so many factors that go into deciding which car to buy that I doubt the rather technical distinction between utilitarian and non-utilitarian AI models will be something most consumers are conscious of, much less understand, much less weigh so heavily that it guides their purchasing choices. It's not as if car makers will be able to openly market that their car will kill pedestrians to save its passengers. But this prisoner's dilemma of which you speak is more likely to occur if journalists and philosophers keep fueling the irrational fire of the original trolley problem.

  4. Garrett, I spend maybe 15 minutes on the Trolley Problem each semester. I agree with you that it is overly simplistic; life is far more complicated. I’m also all for autonomous cars.

    But I still think the Trolley Problem is worth 15 minutes of my time and my students’ time. I discuss the Trolley Problem in my ethics class to motivate objections to utilitarianism (e.g., that utilitarianism fails to pay attention to things that also seem to have moral significance). The Trolley Problem presents two contrived alternatives, but it allows us to apply a method of comparing and evaluating alternatives.

    Consider this scenario.
    Let x = the number of alternatives
    Let y = the number of morally relevant features (although this could be divided into multiple variables)

    Case 1: 2x2y (two alternatives, two morally relevant features, e.g., number of lives, social utility of each alternative).

    Case 2: 3x2y (three alternatives, two morally relevant features)

    Case 3: 3x3y (three alternatives, three morally relevant features, e.g., add the decision-maker’s intention).

    Case n: nxny

    The Trolley Problem addresses case 1. But the method of evaluating alternatives, paying attention to what is morally relevant for the theory in question, would apply to case 2, case 3, ...and case n. So, as long as the Trolley Problem is introduced as a heuristic that is deliberately simple for an ethics class discussion, I think it still has value. I often tell my students that utilitarian methodologies such as cost-benefit analyses are used by policy makers in determining the best policy alternative. Policy makers may consider every reasonable alternative and every foreseeable and relevant consideration, but we can’t possibly do the same in a 50-(or 15-)minute discussion.
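
    As a minimal sketch of the kind of weighted comparison these cases describe (the alternatives, feature scores, and weights below are invented purely for illustration, not drawn from any actual decision procedure):

    ```python
    # Minimal sketch of comparing alternatives by their morally relevant
    # features, in the spirit of the cases above. All alternatives, feature
    # scores, and weights are invented purely for illustration.
    alternatives = {
        "divert_to_side_track": {"lives_saved": 4, "social_utility": 0.6},
        "do_nothing":           {"lives_saved": 0, "social_utility": 0.5},
        "third_option":         {"lives_saved": 3, "social_utility": 0.8},
    }
    weights = {"lives_saved": 1.0, "social_utility": 0.5}  # hypothetical weighting

    def score(features):
        # Weighted sum over however many morally relevant features we track (case n)
        return sum(weights[name] * value for name, value in features.items())

    best = max(alternatives, key=lambda name: score(alternatives[name]))
    print(best)  # "divert_to_side_track" under these made-up numbers
    ```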

    In short, the trolley problem is a useful teaching tool that should come with a disclaimer: life is messier. To your point, if some are relying on the tool beyond its usefulness, that may be a problem.

    1. Hi Chong,

      I spend maybe 15 minutes on the trolley problem in some of my classes, too. It does illustrate both utilitarian and anti-utilitarian intuitions and how easily they can be brought into conflict in the same person. Josh Greene's work in brain-imaging people making trolley decisions is very enlightening. And yes, any time it is presented in a class it should be heavily qualified with 'life is messier.'

      I'm not so sure that it is a useful tool to set 'number of morally relevant features' or 'number of alternatives' as variables in an equation, though. In thought experiments this might be manageable, but in real life we are rarely able to identify, much less count, all of the alternatives or morally relevant factors. If we frame the issue in such calculative language we get a false--even damaging--view of how we ought to approach moral decision making.

  5. Hi Garrett,

    I like what you’re doing here.

    I also agree with your general concern about trolleyology. While fun in some respects, it’s too shallow ethically to really motivate much more than generating some intuitions of the utilitarian vs. deontological sort.

    I spend some time drinking beer and chatting philosophy with a handful of automotive engineers, some of whom are working on different aspects of autonomous vehicles. We’ve talked about the trolley problem and its value in their deliberations. While I agree with you that their interest is not primarily ethical, they are interested in the technology being acceptable. The popular ethical imagination around this development seems to be expressed in this trolley-problem way, so it’s not like they’re ignoring it, nor should they.

    One thing they are finding, and therefore are interested in, is a slightly different take on the ‘sacrifice one to save the many’ problem. In the trolley problem, the moral agent determining the outcome is not herself otherwise a participant. That is, her only involvement in the problem is deciding whether or not to trigger the trolley to switch tracks. That’s the extent of the moral dilemma for her. She is not one of those whose lives will be sacrificed by the decision. Her action has consequences for others, life-or-death consequences. But for her, the problem presents only the consequence of a guilty conscience or not, of having done a wrong or not, of being morally culpable or not.

    Instead, the moral dilemma that seems to be raised by autonomous vehicles is not “hit the one pedestrian to avoid hitting the five”; rather, it is “avoid hitting the five pedestrians at the cost of the passenger”... i.e., me, riding along inside my autonomous vehicle. In this scenario, the agency lies not with me, the erstwhile driver, but with the AI. And one of the lives at risk in the decision my AI driver makes is my own. Apparently, developers are finding that a strict, impartial, utilitarian-like minimize-the-harms-even-if-doing-so-results-in-harm-to-me calculus undermines support for autonomous vehicles. As one friend put it, “Would you want to buy a vehicle that was designed NOT to prefer your safety over that of others, but to count you as merely one among those who could be harmed?.... Right, I thought so...” This existential shift tests even the most committed utilitarian’s commitment to the principle of impartiality.

    This might be a more realistic scenario than those presented in the standard and variant trolley problems. In this conversation with my engineer friends, I was reminded of a real life scenario from my own past. My father told me a story about finding himself in a situation where staying on course would lead to a massive collision into the back of a stopped public transit bus or where swerving one direction would take him into head-on traffic and swerving the other direction would smash into the people waiting to board the bus. He took option one, totaling his car into the back of the bus. He survived, but was seriously injured. He was the only one injured. (He also lost his 1968 Mercedes 300SL.) However, he never regretted the decision, even when telling me the story while showing me his scars. An important ingredient in his lack of regret, I think, was that it was his choice. His calculation of the risk, and his life to put on the table. The thing about an autonomous vehicle, though, is that it makes the calculation for you.

    Would it be a more interesting ethical consideration if we rephrased the trolley problem:
    You find yourself tied to a trolley track, and see five other people tied to the other trolley track. The trolley is hurtling toward you. Do you want your AI driver to be the impartial agent at the switch?

    1. Hi Chris,

      I think the concerns your engineering friends bring up are touched on by Lok's comment above and my reply thereto, so I'll not repeat them here. But your father's story jumps out at me, as it's the first real-world version of a trolley problem presented to me by a person I actually know. Every time I present the trolley problem I ask the audience if anyone has ever had to make a decision like this, or if they know anyone who has. Before now I've had a zero (out of at least a few hundred) positive response rate.

      But that just underscores how rare these scenarios are. As a philosopher, of course, I'm always willing to consider hypothetical scenarios, no matter how bizarre or out-there they are. But if what we're looking for is guidance in the real world, setting policy, or understanding what human beings actually care about when making moral decisions, I don't think we really learn much by imagining ourselves in the trolley problem, regardless of whether we're on the track or next to it.

  6. Not to be a retrograde curmudgeonly sort here, especially since I'm showing up pretty late to this party, but does anyone know whether the technology for autonomous vehicles has already been tried out on…y'know, actually TROLLEYS? Or trains? Maybe it could be beta-tested there (or already has been?) before, or while, we move on to planes or automobiles...
