Sunday, February 26, 2017

The trouble with moral thought experiments

In last week's post Garret Merriam argued that the famous brain-in-a-vat thought experiment is incoherent.  In this post I argue that many popular moral thought experiments are flawed as well. I won't argue that they are incoherent; rather, I claim that they tend to presume and promote a flawed understanding of human decision-making.

So first a few words about that:

Human beings are social animals. We have learned to cooperate with one another in order to acquire goods that we cannot easily secure in isolation. In every human society adults are expected to do two things: (1) manage their personal affairs, and (2) respect the rules that make the benefits of cooperation possible.

On any given day we make thousands of almost entirely self-interested decisions. Most are trivial, such as which word I should use to finish this sentence. Some are more significant, such as whether to head for the beach or the mountains on Sunday. In each case I am just doing my best to figure out which of two options will deliver the greatest personal utility. (I am not saying that these actions have no moral significance, only that we do not typically bring moral considerations to bear when deciding whether to perform them.) We also, though less commonly, make decisions that are almost entirely moral in nature. For example, I may be completely committed to helping you move to a new apartment, deliberating only on how I can be of greatest assistance.

But the more interesting decisions occur when both types of considerations are salient. The magic of well-organized societies is that they tend to support the same conclusion. When my alarm rings in the morning I haul my butt out of bed and drive to work. This is because it would be both bad for me and wrong of me not to. Sometimes, however, these considerations support different decisions. It might be morally better to help you on move day; still, it is shaping up to be beautiful outside and I would much rather go for a hike. In situations like this I have to decide whether to do what is right or to do what I like.

When doing moral philosophy we sometimes wrongly suppose that whenever considerations of morality and self-interest come into conflict, we ought to do the morally right thing.  But this is incorrect. Of course it is tautological that morally we ought to, but we are not always expected to sacrifice our own interests for the benefit of others. Rather, when decisions like this arise, we weigh what we ought to do morally against what we ought to do prudentially, and make the best decision we can. This is easier said than done, especially since these two types of value are not obviously fungible. But it is our task, nonetheless.

Now for the problem with moral thought experiments:

Most moral thought experiments are intended to bring out a conflict between different ways of thinking about morality, typically between a utilitarian and a deontological approach. In the trolley problem, e.g., it is first established that most people judge that one ought to pull a switch that would divert a runaway trolley so that it kills the fewest possible people. Later we see that most of us also judge that one ought not to push a fat man off a bridge to precisely the same effect. Some philosophers argue that this shows that we are prone to making inconsistent moral judgments. Others claim that we must be detecting morally relevant differences between the two cases.

I don't think either of these conclusions is warranted. This experiment and others like it are flawed.

The flaw is that the hypothetical situations described in thought experiments like these are presented as if they constitute purely moral decisions. As noted above, such decisions do occur in everyday life, but scenarios like the trolley problem don't approximate them. Rather, they present a decision in which considerations of self-interest and morality are both salient.

This is easily seen in the trolley problem. In each case there is a nontrivial question concerning what is best for society as well as what is best for me. In the switch-pulling version, considerations of morality and self-interest more or less coincide. I calculate that pulling the switch is the best outcome for society and also the result I can live with personally. In the fat man version, these considerations collide. Sure, pushing the man off the bridge will save lives. But I suspect that in the future I will suffer nightmares too intense to bear.

Some may respond impatiently: This is just the familiar sophomoric complaint that the thought experiment is unrealistic. All thought experiments are unrealistic; that is why they are thought experiments rather than real ones. Philosophers know that considerations of self-interest play a role in real life, but we ask that you do your best to bracket these considerations in an effort to develop a clearer understanding of morality.

That is not good enough.

Just how are we supposed to bracket considerations of self-interest in this case? Are we asked to disregard our moral emotions altogether? It is these, after all, that predict a future I wish to avoid. But to do that is to squelch one of our main sources of moral evidence as well. Alternatively, should we allow ourselves to pay attention to the moral emotions, but only for the purpose of moral judgment, taking care (a) not to let considerations of self-interest infect these judgments, and (b) not to confuse the best decision with the morally correct one?

Wow. I have never heard the trolley problem presented like that. It is not at all clear that we have this ability. But if it could somehow be trained up, I'm betting we would end up with a very different data set.

G. Randolph Mayes
Sacramento State
Department of Philosophy

18 comments:

  1. Interesting stuff Randy. I consider a lot of the experimental work I do to be aimed at showing how thought experiments can be misleading. But I also try to create new "clean" versions of the thought experiments, ones that help test a moral principle or answer a question about value in an unfettered way. A major problem with many thought experiments is that they mix moral and prudential motivations. I have often tried to eliminate moral considerations from experience machine scenarios (e.g., don't worry about your dependents, they can plug in too). Likewise, in my trolley problem experiments, I try to eliminate prudential motivations.

    You might argue that breaking moral and prudential considerations apart is a bad idea, but not because it can't be done. Getting people to judge the morality of actions performed by others seems to reduce the impact of prudential considerations. Judging whether a hypothetical stranger with no dependents should use an experience machine also seems to elicit fewer moral judgments than trying to imagine that you don't have any dependents yourself and are going to plug in. These methods might not be perfectly effective, but combining a few might make the judgments pure enough to use in reflective equilibrium, i.e., in making all your moral beliefs consistent.

    But, like I said, you might think striving for this separation of moral and prudential is a bad idea: bad because it makes the thought experiments so unrealistic as to make them irrelevant to our real-life decision making. This would be a mistake, though. The point of thought experiments in normative ethics is not to help you realize what you should do if you are offered a spot in an experience machine or a chance to kill one villager to save 5 or 10 or 50. Those situations are already highly unrealistic. The point of these thought experiments is to test moral theories or principles. We can't easily use real-life examples to test them because either commonsense morality agrees with all the theories (e.g., slavery) or there is no common view and the theories also disagree. Note that this is often different from many thought experiments in practical and applied ethics, which use the argument-from-analogy strategy to argue that the action in question is or is not morally permissible based on its moral similarity to an agreed-upon real-life case (e.g., experimenting on nonhuman animals is like experimenting on humans with severe mental handicaps).

    OK, but why bother testing moral theories and principles anyway? Sometimes the moral and prudential choices we face are highly novel or complex (e.g., which national climate change policy should I vote for?). In these cases, analogies might be harder to draw. That leaves us with our gut feeling. The usefulness of (good) moral principles and theories is that they can be applied to these new issues, and potentially used to challenge our gut feelings, perhaps precisely when our prudential impulses tell us that our Hummer is cool and climate change is a conspiracy.

    In summary, you should be nicer to normative ethicists. Considering their goals, they aren't doing that great, but their project has a point.

  2. Hi Dan, thanks for the thoughtful comments. I think I am well within your camp here (except for the being nice part). I definitely do not think it is a methodological error to try to separate moral and prudential considerations, and my narrow point about the standard trolley problem is just that it doesn't do so.

    That said, there are serious theoretical problems with separating these two types of considerations, namely that doing so can result in creating a bias toward or against one sort of moral theory. On a utilitarian calculus, e.g., your personal utiles count toward both types of judgment. So, from a utilitarian point of view, in any thought experiment that places the subject in the driver’s seat, mechanisms that prevent her from making prudential considerations will also prevent her from making moral ones. The same, of course, is true of some, if not all, subjectivist theories.
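
    In symbols (a minimal sketch; the notation here is purely illustrative, not anything from the original exchange): on a simple utilitarian calculus,

    $$U_{\text{moral}} = u_{\text{self}} + \sum_{i \neq \text{self}} u_i, \qquad U_{\text{prudential}} = u_{\text{self}},$$

    so bracketing $u_{\text{self}}$ removes a term from the moral sum as well as the entirety of the prudential one.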

    I probably do not disagree with your view about what these moral thought experiments are for. I take your point that they are not specifically for finding out what to do in these bizarre situations, though I’m sure you will agree that if we are really learning anything, then they ought to help us in such situations if we are ever unlucky enough to be in one. I think you will probably agree with me that there is a lot more to be said to clarify the suggestion that these thought experiments are testing moral theories or principles. They aren’t testing them in anything like the way we test scientific theories. What we are really testing, in my view, is whether we endorse their implications, and this gets us back to the problem I am raising. Endorse them how?

    Replies
    1. Yes, I agree where you suspect I do.

      The testing of moral theories and principles done wrong is just seeing whether our initial intuitions agree with them in various unrealistic cases. Doing it right means making one's own beliefs consistent: having your carefully chosen favorite moral principles agree with your carefully considered beliefs about what is right or good in various cases (realistic and unrealistic).

  3. I agree with you about the kind of animals we are. We devised moral rules as a way to mitigate conflict and effect cooperation. These things allow us to better meet our individual ideals because they make peaceful society with others possible. But that isn't what motivates our adherence to them or figures into our reasons for following them once a norm has the status in a community of a moral one. That's close to what it means for it to be a moral one. It's non-instrumental.

    So, I would put a few of the things you say differently. Yes, we're not always expected to sacrifice our own interests for the benefit of others, but that's because (or, when that's true) we're not morally required to do that. That's very different than thinking that the normative weight of moral rules is comparable to that of prudential considerations (or aesthetic ones, or gastronomic ones, etc.) and might be trumped.

    I think this helps me resist the idea that the conflict in the trolley problems is between what's best for me vs. society. It really is about what the moral thing to do is.

    Replies
    1. Thanks Kyle. I'm not sure how much we disagree, but maybe some. I think my good and the social good are both intrinsically good. The rules we follow in attempting to achieve these are instrumentally good, i.e., they are instruments for achieving intrinsic goods. I think decent folk value both goods intrinsically, and egoists value moral rules only to the extent that they satisfy their own prudential aims (which is why they defect and ruin everything).

      I actually don't think it's obviously correct to say, as you do, that the reason we're not expected to sacrifice our interests for the benefit of others is that we are not morally required to do so. I think that is just a way of insisting on a framework in which moral considerations always trump personal interests. I think it's clear that most people don't really think that (even if they say they do). But I also think that most people would agree with you.

    2. I'm not sure how much we disagree either. Your good and the social good are both intrinsically good, but we shouldn't just assume that morality is simply about promoting things that are intrinsically good. Moral rules more typically identify goods that are to be respected and so condition what we can legitimately do to promote intrinsically good things (or things we regard as such). Those are the kinds of rules you'd expect us to hit upon, given that they're the ones we'd need in order to avoid conflict (and the resulting losses) in a society.

    3. Totally agree with that. Morality promotes specific goods everyone values by restricting the goods individuals can pursue for themselves, or at least the way they may pursue them.

      As far as the post is concerned, all I am really saying is that there are times in the tug of war between self-interest and morality where most morally decent people will choose to do the self-interested thing. But there is not much upside to representing or even understanding yourself as a conscientious defector, so in most of these situations we tend to rationalize the self-interested choice, to ourselves and to others, as the moral choice. This is especially likely in those situations in which we predict that doing the right thing would actually make us feel worse about ourselves than doing the wrong thing.

      I think thought experiments like the trolley problem are insensitive to this fact, and as a result the data collected in this way are not very interesting from the point of view of testing moral theories.

    4. I think that's really interesting.

    5. Randy, very interesting account of what thought experiments like the trolley problems are all about. But I'm not entirely convinced that the central tension is between what we think is morally right and the pull of self-interest, at least not always. Take this example.
      The sea is flooding a rocky coast. 5 people are trapped in a place where they will soon drown. Time is of the essence. You are driving to rescue them. But on your way you see another person, also trapped, calling for help. If you stop and help her, the other 5 will drown.
      Most people will say that you should leave the one person to drown and save the 5. But here's a second scenario. The only way you can save the 5 is to drive over and crush to death someone who is lying on the road (they're tied up, fastened down, and there's no way around them). Philippa Foot (arguing against active euthanasia) argues, or rather asserts, that it would be morally wrong to drive over this person to save the 5.
      In this case, she would say her claim has nothing to do with self-interest. The tension is between two moral principles or intuitions. And I rather suspect she'd say the same about the fat man version of the trolley problem.

    6. Em, thanks for the comment, challenging and thoughtful as always.

      I guess I see this as belonging to the same family as the trolley problem, just as you indicate Foot would, and I think the considerations are identical. Crushing one to save five feels the same as pushing the fat man off the bridge to me. I just couldn't live with it, even knowing that it would be the morally right thing to do. So I would put this in the category of a situation where self-interest trumps morality.

      Your example helps to clarify what I am saying here, though, and maybe flush me out into the open in ways that make it easier to simply disagree with me. I suspect you might simply reply: Look, aren’t you just stipulating that the morally right thing to do is given by the utilitarian calculus? The whole point of these thought experiments is to test our moral conceptions. In these cases we are testing utilitarianism by showing that it makes a prediction that we do not morally endorse.

      My general reply to this is that if we are looking for a moral theory that satisfies all our pre-theoretic ordinary moral intuitions, then we are involved in a foolish project. This is because our moral intuitions are triggered by our moral emotions, which are crude responses that have evolved to reinforce cooperative behavior in simple social situations. They are, in other words, quick and dirty heuristics. If we want to get beyond this, and develop a normative framework that is clarifying, then we need to be much more skeptical of the deliverances of these sorts of thought experiments.

      My proposal is to recognize that our moral emotions are delivering information relevant to moral calculations as well as self-interested calculations, e.g., what it is going to be like for me to live with having deliberately run over an innocent person. If we recognize this, then we can also recognize the possibility that what we think is a moral decision is primarily a self-interested one.

      However, I am also adding, as a separate claim, that this is sometimes OK. What I mean by that is that neither morality nor egoism provides an adequate normative framework for human beings. Our views about what we ought to do in any given case arise from a competition between moral and self-interested calculations, and what these thought experiments really show is that there are times when the self-interested decision is the normatively defensible one.

      I lay this down as a challenge in the sense that I realize that we do not currently have an intelligible framework for weighing self-interest against morality. This is why we have people arguing that it has to be always one or always the other. There are plenty of people who will simply reply that this is an incoherent demand. I say, maybe it is in terms of our current framework. But I think the framework is due for revision, and that's what philosophers are supposed to be good at.

    7. Randy, is Singer's principle -- that if it's in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it -- a moral intuition? If not, what is it?

    8. Kyle, I would distinguish between an intuition and a principle, but I know you would, too. An intuition is a feeling and a principle is a standard. But if I understand your point, it is: Don't we ultimately rely on our intuitions regardless of what moral theory we advocate, to test such principles?

      Yes, I think so. I think knowledge progresses on the basis of intuitions, but that the intuitions are highly defeasible, and that we must be prepared to override them even when they are recalcitrant.

      We rely on intuitions in science, too. But science has only progressed because we are able to recognize individual intuitions as illusory. This usually occurs when a particularly powerful explanatory model emerges which implies this. The intuition doesn't go away (we all still feel that the earth is immobile, that humans aren't animals, that true love lasts forever, and that liars are easy to spot), but educated people just don't accept these as an indication of the nature of reality.

      So my point here to Emrys is just that, like our physical intuitions, our moral intuitions are the expression of a motley set of behavioral heuristics, which will produce unreliable and contradictory results when forced into service outside of their domain of application. If our overarching aim is to rationalize human behavior (by which I mean develop a framework that helps us to negotiate the demands of self-interest and morality in a way that most of us find acceptable), we have to be prepared to override individual ones, and especially those which we have independent grounds for doubting.

  4. I would like to validate my existence by offering input on this article (and I want some extra credit). From what I can understand…

    Your definition: Moral thought experiments are intended to bring out a conflict between different ways of thinking about morality (typically between a utilitarian and a deontological approach).

    The Argument:
    - Hypothetical situations described in these experiments are presented as if they constitute a purely moral decision.
    - Instead, they provide a decision in which considerations of self-interest and morality are both salient.
    - Since they are presented inaccurately, moral thought experiments are flawed.

    So here’s my problem. In order for that argument to be sound, the purpose of these moral thought experiments must be well-defined. I’m not so sure that the word “flaw” rightly fits this situation, for it is my belief that this perceived flaw is necessary for these experiments to accomplish their purpose. Let’s take the Trolley Problem as the example; I see two possible ways this experiment can be perceived:

    1) Utilitarian Test
    The parameters of the situations are described as if they are purely moral. As a utilitarian test, this would be required because a utilitarian, by nature, does not care about the differing circumstances; 1 death saves 5 lives. So the true purpose of this experiment can be viewed as a way to test whether our decisions are utilitarian. Since the majority of people were in conflict between the two possible answers, we can deduce that we, as a whole, do not judge in accordance with utilitarianism. So under this premise, the Trolley Problem isn't flawed at all, since the perceived flaw was intentionally there to teach us something about reality.
    You mentioned being able to endorse their implications. In this case, say an overwhelming majority of people find conflict between the two answers. We could deduce that we, as a society, do not make decisions in accordance with utilitarianism. The implication here is that we learned something, which would lead to the possibility of a different moral theory. Perhaps the Principle of Double Effect?
    Additionally, this premise can be applied to any similar moral thought experiment, such as the "Jim and the Indians" case linked above.

    2) The Other Viewpoint ("Cut-off" Method?)
    I really can't come up with a name for this one, so I'll try to portray it with an example from Groucho Marx:
    Groucho asks a woman, "Would you sleep with me for $1,000,000?"
    She replies, "Of course I would!"
    Then he asks, "How about for $40?"
    Angrily, she responds, "What kind of woman do you think I am?"
    Groucho: "We've already established what kind of woman you are; now we are just haggling over the price."

    The point I am trying to make here is that the first scenario is used as a way to establish whether or not someone deems it morally acceptable to kill one man to save five. The second scenario is designed to find the moral "cut-off," so to speak: where does a person, or a group of persons, draw the line between self-interest and morality? If this is indeed the purpose of the trolley problem, then the perceived flaw is, once again, completely necessary.

    I guess what I’m trying to say is that when it comes to moral thought experiments, the flaws make them flawless. If we were really presented with two options, both equally and truly moral, then the experiment would lack a purpose, which would, in turn, lead to option 3:

    3) Face Value (Sarcasm)
    Scenario 1 is easy because I have nothing to lose. Scenario 2 sucks because now I have something to lose. This experiment is flawed.

  5. After reading G. Randolph Mayes's passage about moral thought experiments being flawed, I have to say that I agree with him. I also believe moral thought experiments don't take into account the choice between self-interest and morality. He explains that one can make one of two decisions: the right and moral thing to do, or what you would like to do. In the case of morality, the action that produces more good is the right decision. On the other hand, doing what gives you more pleasure can also be the right choice as well. This creates an inconsistency where there can be two different answers, so the correct answer depends on which perspective you take. Although moral thought experiments are flawed, I think there is a way to improve them to include all aspects of the decision-making process, so that the experiments will ultimately have only one correct answer and can be implemented in the real world. I believe that a society can ultimately come to an agreement about most things that are to be deemed good and bad. Once that standard is established, we can examine the personal element of the decision process. It would be unfair and unethical to force someone to perform an act that compromises their beliefs and has long-term effects on them. We must develop a system that upholds the morality of the society and at the same time does not interfere with a person's own integrity. To do this, we would have to study and perform tests where a boundary can be established beyond which a person would no longer feel comfortable performing the action. Obviously, one person's comfort level varies from another's, so a large sample population must be studied to generate a mean boundary line that must not be crossed. There are many other ideas to improve moral thought experiments, but this is one suggestion.

  6. Hi Justin, this is interesting, thanks. Just a couple of thoughts in response.

    1. Your view that the purpose of the 2nd scenario is to determine when self-interest overrides morality is worth thinking about. I think hardly anybody who teaches the trolley problem thinks about it in this way, but that doesn't mean they shouldn't or couldn't. Almost universally, the data from the 2nd experiment are regarded as expressing a distinct moral judgment rather than a self-interested one.

    2. I think the 2nd scenario is poorly designed for the purpose you suggest. The reason, which I suggest in the post, is that considerations of self-interest and considerations of morality are not cleanly separated there. My moral emotions are being used in inferring both my moral judgment of this action and my worries about my own future. So we really can't draw any legitimate conclusions concerning the contribution of each to the judgment. If we really want to figure out the cut-off you suggest, then we could do something like make the subject him or herself into the fat man on the bridge and ask if s/he would be willing to jump off. Ultimately we'd probably need something more fine-grained than this, but the principle would be the same.

  7. After reading the blog post, I agree that moral thought experiments are flawed. Your claim that morality and self-interest are both important in the trolley car scenario makes sense. Morality says that we should cause as little harm as possible no matter what. However, when you factor self-interest into the second scenario, with the fat man as the victim, the choice isn't so clear-cut. We can see the impact of morality and self-interest in everyday life as well. For instance, morality says that we should help those less fortunate than us, like the homeless, but there will always be someone less fortunate than us. Self-interest says that I can't constantly help everyone I see because I need my resources. I don't feel that the trolley problem shows that humans make moral judgments that waver, but rather that our level of personal involvement, even in something that is purely a "what if" exercise, plays a much bigger role than morality. Morality becomes the thing that we should do, while self-interest decides what we will do. Based on this, I don't feel that we will ever develop moral thought experiments that aren't flawed. No matter what scenario is put in front of us, it is presented as though what should be done is solely a moral decision. If self-interest were built into moral thought experiments, the perception would differ for each person asked about the scenario. The moral choice would be harder to make with all the different variables that could be thrown into the mix, especially for those who are learning philosophy. With the deontological and utilitarian approaches you can't account for self-interest; it becomes impossible to account for every possibility.

  8. I agree that moral thought experiments are flawed. Pulling the switch to kill the fewest people and pushing the fat man off the bridge produce the same outcome for society, but they have different effects on the person performing the action. Mr. Mayes says that in this type of experiment we are, in a sense, expected to disregard our moral emotions, even though it is these that predict a future he wishes to avoid. This fits Bentham's philosophy: for him, both scenarios would be equal, since each results in the least pain for the majority by sacrificing one person to save the other five. I agree with this. In the switch problem it is better to pull the switch and let one person die instead of five. In this scenario we would be able to sleep at night more comfortably, knowing we sacrificed one life to save five. It is the moral and even heroic thing to do. But when we are asked to push a fat person off the bridge to produce the same outcome, it feels different. It affects our moral emotions. Mr. Mayes highlights that in his case he would not be able to sleep at night, having nightmares about what he has done, even though it would be the same outcome for society. Mr. Mayes says that we are not always expected to sacrifice ourselves for the good of others, and these scenarios bear that out: most of us wouldn't kill the fat man because it would involve sacrificing our good night's sleep and mental health. Personally, I think that people's responses are not accurate. People might say that they would pull the switch but not push the fat person off the bridge. However, of all those people, how many would actually even pull the switch? Personally, I don't know what I would do. Sometimes we say we will do one thing, but in the moment, for some reason or another, we end up doing something completely different.

  9. I agree with Mayes that these thought experiments are not flawless. He makes a good point when he examines the purpose of thought experiments, which is to bring out the conflict between self-interest and morality. I believe that in the trolley experiment most people will choose to pull the switch to save the lives of the 5 people because they do not want to suffer society's unkind view of them. In my opinion, self-interest and morality always go together; one cannot go without the other. As Mayes discussed in his blog, you are not always expected to sacrifice your own interest for other people's benefit. Therefore, whenever I make a decision, I have to consider whether both self-interest and morality are salient.
