Sunday, February 26, 2017

The trouble with moral thought experiments

In last week's post Garret Merriam argued that the famous brain-in-a-vat thought experiment is incoherent.  In this post I argue that many popular moral thought experiments are flawed as well. I won't argue that they are incoherent; rather, I claim that they tend to presume and promote a flawed understanding of human decision-making.

So first a few words about that:

Human beings are social animals. We have learned to cooperate with one another in order to acquire goods that we cannot easily secure in isolation.  In every human society adults are expected to do two things: (1) manage their personal affairs, and (2) respect the rules that make the benefits of cooperation possible.

On any given day we make thousands of almost entirely self-interested decisions.  Most are trivial, such as which word I should use to finish this sentence. Some are more significant, such as whether to head for the beach or the mountains on Sunday. In each case I am just doing my best to figure out which of two options will deliver the greatest personal utility. (I am not saying that these actions have no moral significance, only that we do not typically weigh moral considerations when deciding whether to perform them.) We also, though less commonly, make decisions that are almost entirely moral in nature.  For example, I may be completely committed to helping you move to a new apartment, deliberating only on how I can be of greatest assistance.

But the more interesting decisions occur when both types of considerations are salient. The magic of well-organized societies is that the two tend to support the same conclusion. When my alarm rings in the morning I haul my butt out of bed and drive to work. This is because it would be bad for me not to and wrong of me as well. Sometimes, however, these considerations support different decisions.  It might be morally better to help you on move day; still, it is shaping up to be beautiful outside and I would much rather go for a hike. In situations like this I have to decide whether to do what is right, or to do what I like.

When doing moral philosophy we sometimes wrongly suppose that whenever considerations of morality and self-interest come into conflict, we ought to do the morally right thing.  But this is incorrect. Of course it is tautological that morally we ought to, but we are not always expected to sacrifice our own interests for the benefit of others. Rather, when decisions like this arise, we weigh what we ought to do morally against what we ought to do prudentially, and make the best decision we can. This is easier said than done, especially since these two types of value are not obviously fungible. But it is our task, nonetheless.

Now for the problem with moral thought experiments:

Most moral thought experiments are intended to bring out a conflict between different ways of thinking about morality, typically between a utilitarian and a deontological approach. In the trolley problem, e.g., it is first established that most people judge that one ought to pull a switch that would divert a runaway trolley so that it kills the fewest possible people.  Later we see that most of us also judge that one ought not to push a fat man off a bridge to precisely the same effect. Some philosophers argue that this shows that we are prone to making inconsistent moral judgments. Others claim that we must be detecting morally relevant differences between the two cases.

I don't think either of these conclusions is warranted. This experiment and others like it are flawed.

The flaw is that the hypothetical situations described in thought experiments like these are presented as if they constitute purely moral decisions. As noted above, such decisions do occur in everyday life, but scenarios like the trolley problem don't approximate them.  Rather, they present a decision in which considerations of self-interest and morality are both salient.

This is easily seen in the trolley problem. In each case there is a nontrivial question concerning what is best for society as well as what is best for me. In the switch-pulling version, considerations of morality and self-interest more or less coincide. I calculate that pulling the switch is the best outcome for society and also the result I can live with personally. In the fat man version, these considerations collide. Sure, pushing the man off the bridge will save lives. But I suspect that in the future I will suffer nightmares too intense to bear.

Some may respond impatiently: This is just the familiar sophomoric complaint that the thought experiment is unrealistic. All thought experiments are unrealistic; that is why they are thought experiments rather than real ones. Philosophers know that considerations of self-interest play a role in real life, but we ask that you do your best to bracket these considerations in an effort to develop a clearer understanding of morality.

That is not good enough.

Just how are we supposed to bracket considerations of self-interest in this case? Are we asked to disregard our moral emotions altogether?  It is these, after all, that predict a future I wish to avoid. But to do that is to squelch one of our main sources of moral evidence as well. Alternatively, should we allow ourselves to pay attention to the moral emotions, but only for the purpose of moral judgment, taking care (a) not to let considerations of self-interest infect these judgments, and (b) not to confuse the best decision with the morally correct one?

Wow. I have never heard the trolley problem presented like that. It is not at all clear that we have this ability. But if it could somehow be trained up, I'm betting we would end up with a very different data set.

G. Randolph Mayes
Sacramento State
Department of Philosophy

Sunday, February 19, 2017

How to build a brain in a vat

Philosophers love thought experiments. They're fun, memorable, engaging tools for getting us to think about perplexing intellectual or moral problems. When engineered well, thought experiments can shed light on obscure concepts, raise challenging questions for dominant modes of thought, or guide us to recognize a conflict between two deeply held intuitions. When engineered poorly, however, they can instill a false sense of understanding or create needless confusion in the guise of profundity. Sadly, many of the most famous thought experiments in philosophy are engineered poorly.

Consider one of the most famous modern examples of this problem, Gilbert Harman's "Brain-in-a-Vat" thought experiment.[1] A descendant of René Descartes' "evil demon" hypothesis,[2] this thought experiment is designed to motivate general skepticism about sense perception and the external world. What if, we are asked to imagine, you're not really here right now, but instead you are just a disembodied brain, suspended in fluid, with a complex computer stimulating your brain in all the right places to artificially create the experiences you take yourself to be having? For example, the computer could send signals to your visual cortex making you think you’re looking at a blog post on The Dance of Reason, when in fact you’re looking at no such thing, because you have no eyes. Hypothetically, the thought experiment says, there would be no way to tell the difference between a reality where your brain is directly stimulated in this way and one where you actually have a body that interacts with the world at large. Given this indistinguishability, how can we ever really rely on our senses? How can we ever have any kind of empirical knowledge at all?



Many late-night hours have been spent trying to answer this skeptical riddle. As an intellectual puzzle, an amusing game to get us thinking, or a way to kick-start a conversation in an intro to philosophy course, it works just fine. But as a tool for trying to understand how humans know the world, it is deeply misleading.

The problem, in short, is that neurologically speaking conscious experience simply does not work the way this thought experiment presumes it does. The brain is a necessary, but not sufficient, condition for having experiences. This is not because, as Descartes argued, we have some nonphysical aspect to our mental lives, but rather because a disembodied brain is physiologically incapable of producing the panoply of experiences that we all have every day.

Consider, for example, emotions. While the processing of emotions takes place in the brain, the key ingredients that make up the neural correlates of emotions—hormones and neurotransmitters—are created by the endocrine system, the network of glands distributed throughout the body.[3] Without these glands you would never feel love, anger, sorrow, joy, lust, hunger or disgust. The absence of these feelings would be a dead giveaway that you were a disembodied brain in a vat.[4]

But it doesn’t stop there. In addition to an endocrine system, you would need circulatory and lymphatic systems to transport the hormones from the glands to the (very specific!) parts of the brain where they are needed in order to give rise to specific emotions. You would also need a digestive system to supply the chemical precursors that fuel the endocrine system, while your integumentary system (skin, hair) is essential for flushing byproducts the other systems can’t use. Lastly, all those organs need to be supported by something, making a skeletal system indispensable as well.

In short, the only way to build a brain in a vat is to make the vat out of a human body.

I suspect two objections are occurring in your brain right now. First off, how do I know we need these systems to feel emotions? What if I only think that because the evil genius programming the computer controlling my brain has led me to believe this in the first place? Haven’t I failed to take the force of the skeptical argument seriously?

Okay, I reply, but how do we know we even need a brain in the first place? Why doesn’t the thought experiment work if it’s just a vat and a computer? For that matter, how do we know there are such things as vats or computers or evil geniuses at all? In order to be expressible in language the thought experiment has to be grounded in something, some kind of experience that explains how our experiences might be systematically misled. If the skeptic can help themselves to a host of experience-based ideas to fund their thought experiment, it seems disingenuous of them to object when I do the same to defund it.

The second objection charges me with taking the thought experiment too literally. The point of the thought experiment was to explore epistemology and the limits of our sense perception, not the neuroanatomical foundations of our emotions. We can acknowledge the facts about the physiological basis for hormones and still benefit from pondering fantastic hypotheticals such as these.

This objection precisely illustrates the problem with thought experiments I mentioned in the first paragraph. Epistemology is not bounded by the limits of our imaginations alone. Human beings come to know things by using our brains and bodies, and the empirical realities of those brains and bodies place constraints on what knowledge can be, how it can work, and how we can attain it. When we abstract away from real flesh-and-neuron human beings we are left with nothing human in our epistemology. Whatever is left over has little bearing on anything worth caring about.

Thought experiments that are accountable only to our imaginations are unlikely to provide us with insight into complex topics like the true nature of minds, morality or metaphysics. As Daniel Dennett says, “The utility of a thought experiment is inversely proportional to the size of its departures from reality.”[5] If we want to contemplate skepticism and the limits of sense perception, there are plenty of ways to engineer realistic thought experiments based on the real-world limitations of the human brain.

Garret Merriam
Department of Philosophy
University of Southern Indiana


[1] Harman, Gilbert (1973). Thought, p. 5. Princeton University Press.

[2] Descartes, René (1641). The Meditations Concerning First Philosophy (John Veitch, trans., The Online Library of Liberty, 1901), Meditation II, paragraph 2.

[3] Ironically, the endocrine system includes the pineal gland, which René Descartes speculated was the point of contact between our immaterial minds and our material brains. Rather than serving as a magic intermediary between two metaphysical planes, the pineal gland is part of what grounds the brain squarely within the body itself.

[4] It is only fair to mention that three parts of the endocrine system—the hypothalamus, the pituitary gland, and the pineal gland—are technically housed inside the brain. The supporter of the Brain-in-a-Vat argument could perhaps lay fair claim to these, as they would be included in the terms of the original thought experiment. Nonetheless, the other parts of the endocrine system (including the thyroid, the adrenal glands, the gonads, and other glands) are distributed throughout the body, placing them well out of play for the original thought experiment.

[5] Dennett, Daniel C. (2014). Intuition Pumps and Other Tools for Thinking, p. 183. W.W. Norton & Company.

Friday, February 10, 2017

The other “One Percent”

Let us pause and reflect on the following: those who hold PhD degrees are the Warren Buffetts of epistemic resources. They have been privileged with more educational experience and access to intellectual activities than 99 percent of living humans. Consider that simply having been awarded a bachelor’s degree puts one in the top 30% of educated persons in the United States, a master’s degree puts one in the top 7%, and a PhD in the top 1%. Worldwide, the statistics are much more striking.[1] Although there is plenty of criticism to direct at higher education, it is hard to argue against the following: those who hold college degrees have had an experience of great epistemic value that others have not. Nonetheless, it is rarely, if ever, suggested that PhDs ought to share this intellectual wealth.[2] But why not?

Given the importance of epistemic resources to a life well-lived, it seems a bit odd that epistemic generosity is not morally expected, especially of those with noticeable intellectual wealth.[3] In various ways epistemic resources are as valuable as financial resources. So why wouldn’t epistemic 1%ers have as much of an obligation to share their epistemic wealth as the financial 1%ers have to share their monetary wealth? This post argues that epistemic 1%ers do have this moral responsibility and that those who fail to share their unique type of wealth are in fact failing to do what they ought. This moral “oversight” can be understood as a vicious character trait, i.e., many of the intellectually wealthy are epistemically greedy.

I will use the term “epistemic greed” as follows. Epistemic greed is greed for epistemic resources. “Epistemic resources” should be understood broadly. Examples include physical goods, epistemic services, cognitive states, and intellectual abilities that are specially related to knowledge, understanding, rationality, etc. Those who are epistemically greedy keep, take, acquire, or stockpile epistemic goods which they might otherwise share with the epistemically less advantaged. Here is a first shot at defining epistemic greed:
Epistemic Greed (EG): To hoard, acquire, or use an excessive amount of epistemic resources with insufficient concern for those who are less epistemically advantaged.
While the above definition is on the right track, I think too much is left vague by the expression “excessive.” Let us try a definition with more specificity:
Epistemic Greed (EG): Sharing comparatively little of one’s total epistemic resources with those who are less epistemically privileged than oneself.
In line with Aristotle’s notion of generosity, this second definition places a higher moral obligation on those who are epistemically wealthy. Let us recall that Aristotle argued the following:
 “[I]n speaking of generosity we refer to what accords with one’s means. For what is generous does not depend on the quantity of what is given, but on the state [of character] of the giver, and the generous state gives in accord with one’s means. Hence one who gives less than another may still be more generous, if he has less to give” (2014: 51). 
This Aristotelian understanding seems to fit with our everyday, pre-theoretical understanding of the “non-epistemic” concept of greed. We expect, for example, those who are rich to give more than those who are not rich.[4] And just as monetary greed influences the egalitarian make-up (or lack thereof) of society, so does intellectual greed affect the societal distribution of epistemic goods. If this much is correct, then the paucity of discussion on epistemic greed is a noteworthy philosophical oversight.

For too long moral and political discussions have focused primarily on economic inequalities while ignoring other types of morally weighty inequalities. One reason for this oversight might be another oversight: we have overlooked that just as an improvement in one’s economic means makes it easier to acquire epistemic resources, the converse is true as well: bettering one’s epistemic position makes it easier to improve one’s economic position. Intelligence can help one get a job, get accepted into college, and in various other ways provide means to a more satisfying life. Educational accomplishments, especially degree attainment, are closely tied to lifelong income prospects. In such respects financial and epistemic resources are importantly similar. Both are effective means to a variety of ends helpful in achieving life goals.[5] Not all goods are of this kind. While I may very much enjoy my leather couch, it cannot help me achieve my dream life of an enjoyable career and basic level of material comfort. Epistemic and financial goods, however, can indeed help me in this regard. Money and knowledge are general-purpose tools for a variety of life goals.

Discussing these ideas with academic friends and colleagues, I have heard many object that those with lower educational levels or poor analytic skills have little desire for epistemic goods. “I see your point,” they would protest, “But no one wants what we (academics) have to share.” To me such assertions suggest a disconnect between epistemic elites and their less privileged counterparts. Academics seem prone to mistaken assumptions about those who are epistemically underprivileged. While it may be true that many “ordinary people” dislike college classes and love The Kardashians, I would surmise that even Kardashian fans have some areas of epistemic interest in which some academics could be of help. Yes, often these epistemic interests are pragmatic. Hence helping the disadvantaged might require the epistemic 1%ers to step out of their comfort zone. While many people (university professors, for instance) are capable of helping persons improve their resumes and learn basic computer skills, few are familiar with this type of tutoring. This is no excuse, however, because it is quite easy to become so familiar. Learning what the epistemically disadvantaged desire and how to help requires dedication and open-mindedness, but not much more. Hence the decision not to share is inexcusable. It is simply a socially accepted form of greediness. Society should accept this vice no longer.

Maura Priest
The Humanities Institute
University of Connecticut, Storrs


References


Aristotle (2014). Nicomachean Ethics (C. D. C. Reeve, trans.). Indianapolis: Hackett Publishing Company.

Bailey, M. J., & Dynarski, S. M. (2011). Gains and gaps: Changing inequality in US college entry and completion (No. w17633). National Bureau of Economic Research.

Belley, P., & Lochner, L. (2007). The changing role of family income and ability in determining educational achievement (No. w13527). National Bureau of Economic Research.

Data Sources: Key Takeaways from the 2014 Survey of Earned Doctorates | Council of Graduate Schools. (n.d.). Retrieved from http://cgsnet.org/data-sources-key-takeaways-2014-survey-earned-doctorates-0

Mayer, S. E. (2002). The influence of parental income on children's outcomes. Wellington, New Zealand: Knowledge Management Group, Ministry of Social Development.

Footnotes

[1] See https://nces.ed.gov/programs/digest/d14/tables/dt14_104.20.asp, and https://www.census.gov/content/dam/Census/library/publications/2016/demo/p20-578.pdf, and http://cgsnet.org/data-sources-key-takeaways-2014-survey-earned-doctorates-0. Note that the statistics are often broken down by age group.

[2] I will use the terms “epistemic” and “intellectual” interchangeably. While there are contexts in which this use would be inappropriate, this paper is not one of those.

[3] Long ago, when Aristotle discussed the virtue opposite greed (generosity) within his specific virtue-theoretic framework, he had in mind a notion specifically associated with the giving of financial resources. Nonetheless, Aristotle’s opinion should not always be understood as the final word on virtue.

[4] One critical difference between the points I make in this post and many common discussions of distributive inequality is that I am not solely focused on governmental obligations and solutions. My focus, rather, is on the character of individual epistemic agents and how they ought to treat other epistemic agents. That said, this paper in no way rules out either the possibility that the government might be obligated to rectify epistemic inequalities or the possibility that it might simply be prudent to use the government for egalitarian ends.

[5] While there has long been a connection between wealth and education, recent empirical studies suggest that the last few decades have seen this correlation grow much stronger. For a few studies on this increasing divide, and more general research into income and education, see Belley, P., & Lochner, L. (2007), Bailey, M. J., & Dynarski, S. M. (2011), and Mayer, S. E. (2002).

Sunday, February 5, 2017

The Washington Paradox

The absurdly great musical Hamilton includes the following line from President Washington’s farewell address:
Though, in reviewing the incidents of my administration, I am unconscious of intentional error, I am nevertheless too sensible of my defects not to think it probable that I may have committed many errors.
This seems like an admirably humble thing to say, but one of the philosophically interesting things about it is that it also seems like a reasonable thing to say. That is, Washington does not seem to be describing an unreasonable or irrational attitude about his decisions as president. It is often the case that when examining our actions or beliefs, no one of them seems to be a mistake, and yet we know that we are fallible beings who have likely made at least some mistakes.

The trouble is that certain ways of expressing this general idea lead to puzzling conclusions. Suppose Washington had said something slightly different:
Having carefully reviewed each decision I made as President, I believe of each one that it was not a mistake. Nevertheless, I know that I am not perfect, and so I believe that I must have made some mistakes as President.
This also seems like a reasonable thing to say. Having evaluated all of the consequences, obligations, and whatever other relevant factors, Washington might reasonably believe, for example, that appointing Jefferson as Secretary of State was not a mistake. He might then do the same for each other decision that he made until, for each decision he made, he reasonably believed that it was not a mistake. To see the puzzle more clearly, let’s assign a name to each of Washington’s decisions. We’ll call the first decision ‘D1’, the second ‘D2’, and so on. So, we can represent Washington’s beliefs about his decisions like this:
D1 was not a mistake.
D2 was not a mistake.
D3 was not a mistake.
…
Dn was not a mistake.
Given that Washington’s careful examination of each decision has left him with good reasons to think that it was not a mistake, it seems reasonable for him to believe each proposition on the list. However, it also seems reasonable for Washington, aware of his own imperfections, to believe that some of D1-Dn were mistakes.

But these beliefs cannot all be true. If the beliefs on the list are all true, then none of D1-Dn were mistakes, and so the belief that some of them were mistakes is false. On the other hand, if some of D1-Dn really were mistakes, then some of the beliefs on the list must be false. More than that, with a little reflection, it should be obvious to Washington that these beliefs cannot all be true, and as a result it does not seem reasonable for Washington to believe all of them. So, now we have a puzzle, a version of the Preface Paradox. Each of Washington’s beliefs seems reasonable, and yet it seems unreasonable to hold all of them together.
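
The probabilistic structure behind this puzzle can be made vivid with a toy calculation. Suppose, purely for the sake of the sketch (the reliability figure and the independence assumption are illustrative, not anything Washington could have known), that each individual verdict of the form “Di was not a mistake” is correct 99% of the time:

    # A toy sketch of the Preface Paradox's probabilistic structure.
    # Assumptions (illustrative only): each verdict "Di was not a mistake"
    # is correct with probability 0.99, independently of the others.
    p_each = 0.99

    for n in (10, 100, 500):
        p_no_mistakes = p_each ** n  # P(all of D1..Dn were not mistakes)
        print(f"n = {n:3d}: P(no mistakes at all) = {p_no_mistakes:.3f}")

    # n =  10: P(no mistakes at all) = 0.904
    # n = 100: P(no mistakes at all) = 0.366
    # n = 500: P(no mistakes at all) = 0.007

Each individual belief on the list is extremely well supported, yet for a presidency’s worth of decisions the conjunction “none of D1-Dn was a mistake” becomes very improbable. On this way of looking at it, Washington’s humble concession tracks the improbability of the conjunction, not any defect in the individual verdicts.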

And Washington is not alone here. You’re very likely in the same boat. Consider all of your beliefs about some topic—biology, for example. Supposing you’re a good epistemic agent, each of those is a belief in a proposition that you have carefully considered the evidence for and concluded is true. So, each of those beliefs is reasonable. However, you know that you are imperfect. Sometimes, even after careful consideration, you misread the evidence and accidentally believe something false. So, you have good reason to believe that at least one of your many beliefs about biology is false. And now you have obviously inconsistent beliefs, all of which seem reasonable. So, what should you do?

I think that you and Washington should keep all of your beliefs, even though you know that they are inconsistent. The trick is to explain why it is reasonable to maintain these particular inconsistent beliefs, even though it is generally unreasonable to have inconsistent beliefs. If I have just checked the color of a dozen swans, for example, and come to believe of each one that it is white, it would be unreasonable for me to believe that some of them were not white. So, what is it about Washington’s situation that makes it different from this swan case?

One interesting difference is that it is reasonable for me to think that if one of the swans had not been white, I would have some sign or evidence of that—if some of them were black, for example, I would have noticed. Washington, on the other hand, not only has good reason to think that he has made some mistakes, but also has good reason to think that he might not have noticed some mistakes in his evaluation of hundreds of complex decisions. But this fact does not seem to prevent him from believing that he would have noticed if, for example, Jefferson’s appointment had been a mistake. He might think, for example:
If appointing Jefferson had been a mistake, he would have been a poor Secretary of State, which is something I would notice. So, if it were a mistake, I would have noticed.
Given his careful inspection of all of his evidence about each decision, Washington could give a similar good reason for believing of each decision that he would have noticed if it were a mistake. In fact, the point of carefully inspecting the evidence about each decision seems to be that, in doing so, Washington would notice if it were a mistake.

So, even though, for any decision we pick, Washington has good reason to think he would have noticed if it were a mistake, he still has a good reason to think that he might not have noticed if some of his decisions were mistakes. Perhaps this is what makes it reasonable for him to believe that each particular decision was not a mistake while still believing that some of them were mistakes.

Brandon Carey
Department of Philosophy
Sacramento State