Recent work in psychology and neuroscience has begun to show that emotion is not hostile to rational decision-making, but integral to it. One of the first to guess this correctly was the greatest philosopher of the modern period, David Hume. Hume, as you may know, argued that reason alone has no power to motivate, baiting his opposition with polemical pithiness like the now (in)famous:
"Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them."

and

"'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger."

Philosophers have since felled forests attempting to unpack the precise meaning of pronouncements like these, but the gist is clear enough. Hume advanced an empirical hypothesis, viz., that emotion (passion, desire) is what causes all human behavior. Hence, it is impossible for emotion and reason to be in opposition, and it is impossible to perform any action on the basis of reason alone.
Hume's hypothesis turns out to have been correct to some degree. For example, as a result of the work done by neuroscientists like Antonio Damasio we have learned that people who have certain forms of brain damage can reason well while having no ability to make rational decisions. This is roughly because the damage is to brain structures that process and transmit our emotions to those that produce reasoning. Afflicted people will reason expertly but not act, simply because they never get the emotional input that is required to pull the trigger.
On the other hand, Hume was wrong in thinking that emotions and reason cannot come into conflict, and this is because he did not grasp that our emotions are themselves a means of conveying the output of unconscious inferential processes. (To be fair, such a view would have been received as incoherent at the time; the unconscious mind was, at best, a paradoxical plaything of poets, not natural philosophers.) According to what we now call dual process theories of cognition, human beings have two systems for making inferences and producing behavior. System 1 is responsible for the rapid, intuitive, effortless, massively parallel and mostly unconscious inferences that we need to survive in the natural and social world. Think, for example, about the amazing amount of information you immediately infer from a fleeting shadow or facial expression. System 2 is the laborious, conscious, serial and highly flexible process of inference we associate with calculation and conscious reasoning.
Our emotions, intuitions, hunches and gut feelings are primarily associated with the informational outputs of System 1. System 2, by contrast, is a capacity, perhaps unique to humans, to monitor the outputs of System 1 and to try to correct them when they go awry. System 1 and System 2 are both prone to error, but for different reasons. System 1 errs because rapidity is achieved by imprecise methods like association, stereotyping, bias and instinct. System 2 errs primarily because it requires sustained attention and effort to perform properly.
Without the information supplied by the rapid and typically trustworthy calculations of System 1, reason would be swamped with work it is incompetent to perform. (This, for example, is the predicament of autistics, who often have very high IQs, but have extraordinary difficulty processing language.) It is true that, like our digestive tract, our emotions sometimes run amok and genuinely interfere with the normally smooth functioning of the mind. It is also true that System 1 does not know its own limitations. (Hell, it does not even know it exists!) But without it we would be lost.
What does all this mean for philosophical practice? Here are a few suggestions.
First, philosophers need to finally and fully reject the rationalist conceit that the best of us are people who draw conclusions and make decisions on the basis of reason alone. Vulcans are no more physically possible than philosophical zombies. All rational inference ultimately depends on emotional feedback. When the conclusions we reach through careful ratiocination don't feel right, we philosophers have a strong inclination to reject them, just like anyone else. For too long we have obscured this by calling the feelings philosophers appeal to 'intuitions' housed in a mythical rational region of the philosophical mind called the 'intellect.'
Second, we need to try to stop being frustrated (an emotional reaction :-) when people don't change their minds and their behaviors in response to arguments (ours, of course) they cannot refute. This is just not the way the normal, well-functioning mind works. People are as prone to being misled by delusive reasoning as they are to being blinded by the strength of their feelings, and it is profoundly unwise to automatically privilege one over the other in any categorical sense.
Finally, we need to get comfortable with a vocabulary that explicitly grants emotional reports significance in the epistemic arena. My own view here is that we should learn to draw a clear distinction between our belief that a proposition is true and our feeling that it is true, and not just for the purpose of dismissing the latter from serious philosophical discussion. I would like to constrain the idea of believing that something is true in such a way that it indicates a System 2 inclination to assent to a proposition on the basis of careful consideration of explicitly formulated evidence. By contrast, I would constrain the idea of feeling that something is true to an inclination of System 1 to assent to a proposition on the basis of inferential processes and information, much of which may not be consciously available.
As I see it, one of the primary benefits of this way of speaking is that it reduces our incentive to misdescribe and rationalize our feelings as evidentially based beliefs, just to get them taken seriously by others. If we respect feelings of truth and falsity from the beginning, then we can conduct more constructive inquiries aimed at feeling what we believe and believing what we feel.
Perhaps surprisingly, our traditional conception of the rational agent does not suffer greatly from giving emotion its due. Recognizing that our feelings can be important indicators of a truth that has eluded our reasoning does not in any way give feelings a veto when a conflict arises. When we just don't feel right about a conclusion produced by reason, the proper response is more reason, not more feeling. Sometimes System 2 will discover that System 1 was indeed picking up on evidence lurking under the surface. But other times, especially as scientific knowledge of the evolutionary and neurological basis of System 1 grows, we will discover that the feeling is an illusion resulting from one of its intrinsic blindspots. In such cases, no matter how strong the feeling, rational inquiry must prevail.
G. Randolph Mayes
Professor
Department of Philosophy
Sacramento State
Randy,
Thank you for your post: interesting, well considered, well written and provocative. I enjoyed reading it and having it in the back of my head since I read it.
You evoke Mr. Spock, which I’ll take as an invitation :) Do you remember the episode The Galileo Seven? Spock is in charge of a shuttlecraft and crew that are stranded on a planet with giant hairy beasts that keep throwing boulders and spears at them. The crew drain their phasers to fill their tank to get enough power to leave the planet, but they don’t have enough juice to escape the planet's gravity or even to achieve a stable orbit. Communications are scrambled and the Galileo has no way to call the Enterprise for help.
Spock chooses to dump and ignite all the remaining fuel from the shuttle's engines, which shortens the time the shuttle can survive but in effect produces a giant flare which is spotted by the Enterprise. Kirk speedily turns the ship around and beams the Galileo crew to the Enterprise just in the nick of time, mere nanoseconds before the shuttle burns up in the atmosphere (exciting!)
On board the Enterprise, Kirk tries to get Spock to admit that he burned up the fuel as the result of a desperate, emotional outburst. Spock stoically points out that such desperate measures are the most logical action when reason has run its course.
Do you agree with Mr. Spock? Can reason run its course?
I’m curious about how you think the systems approach might influence applied ethics. You write “System 2 errs primarily because it requires sustained attention and effort to perform properly.” Might the error at times be more severe than requiring sustained attention and effort? Might such sustained attention operationally be understood as paralysis when what ethics demands is action?
You say in your final paragraph “our feelings can be important indicators of a truth that has eluded our reasoning” but this “does not in any way give feelings a veto when a conflict arises.” Does it make sense to say that sometimes the conflict is resolved simply because reason has run its course and all that is left is feeling?
“If we are to be always deliberating, we shall have to go on to infinity.” Aristotle
Randy, thanks for the kind words and interesting questions.
I do remember that episode, though it was only the final conversation that I recalled vividly. (I've put a link to it below for anyone who hasn't watched it.) I remember thinking at the time - and after watching it again I still think - that there was no need for Spock to assent to the claim that it was an act of desperation in any irreducibly emotional sense. His maneuver had a low probability of success, but it had the highest probability of any of the available alternatives, so it was really a perfectly logical act as I see it.
So in that example I would say reason had run its course by settling on the right course of action. Still, there are other situations in which the truly rational thing to do is choose randomly. I'm thinking here of the Paradox of Choice, where we are presented with 20 different digital cameras that are all equally good for our purposes and we are tricked into spending an enormous amount of time comparing their features, ultimately deciding on the basis of features we will never use. That is time irrationally spent, and we really should just go with our feelings. It's fun to make decisions that way, and in this case more rational because it saves time.
I think you are definitely right that there are times when we can reason past the point that action will do us any good. But I'm reluctant to sign on to the general claim that ethics demands action. Often ethics demands inaction, or the action that is inaction if you want to put it that way. This is a very potent problem, e.g., in the practice of medicine today. There are just an enormous number of cases in which the responsible thing for the doctor to do is absolutely nothing, where any action is likely to do as much harm as good to the patient. (Gilbert Welch from the Dartmouth Institute says that one of the most important questions you can ever ask your doctor is: What happens if we do nothing?) Inaction takes a great deal of moral strength, especially when patients and their families are screaming at you to do something for which you will be well compensated. That, I think, is why "Don't just do something, stand there," is starting to be acknowledged as an important principle of leadership.
I think I agree with you that it is possible to have situations in which "reason has run its course and all that is left is feeling." But I think these situations are probably pretty rare and very easily confused with situations in which reason is telling us one thing and feeling is telling us another, and we just privilege feeling. For example, there may be very clear reasons why it is foolish to pursue a romantic relationship with a person to whom we are strongly attracted. Here we might say that reason has run its course because it has failed to change the way I feel, and letting feelings rule the day usually ends disastrously.
http://www.youtube.com/watch?v=DMPib-WB4Ek
Randy,
That was a fun clip. Boy, those folks on the Enterprise sure know how to laugh.
Thank you for your thoughtful reply. I could not agree with you more that sometimes the best action is no action at all. Developing good judgment is the cornerstone of the type of ethics that resonates with me most. I also agree with your take on the Paradox of Choice. It can be helpful to differentiate between freedom of choice, per se, and the functional task of choices in our lives. The important issue does not concern the number of choices we have but whether the choices nurture or deny us, make us more mobile or fence us in, increase self-respect or weaken it. In short, do the choices available to us better our lives? Not all choices enhance freedom. In fact, some impair freedom by taking time and energy we would be better off devoting to other matters. Having 6000 TV channels to choose from is not necessarily a good thing. I, for one, have a limited supply of self-discipline and I dislike using it up making sure I don’t turn on the TV and getting stuck watching reruns of Leave it to Beaver (no disrespect to the Beav). It seems to me that even though choice has obvious value in that it helps us get what we want, an over-abundance of choices can have, paradoxically, the opposite effect. Too many choices become time-consuming, burdensome and perhaps most disconcerting of all, distracting.
Randy, well put. I wonder, though, if you think we might express the same thought by saying that more choices does mean more freedom, but more freedom isn't always a good thing. Freedom is only valuable to the extent that it allows us to better our lives, and complete freedom to choose between alternatives of equal value isn't obviously doing that. I think I prefer that way of speaking because I think we want to preserve the idea that the exercise of freedom can produce both harm and benefit. But I think you're right that there is some sense in which being offered too many choices is a trap from which it is difficult to escape. So there is a kind of local/global freedom distinction here that is useful and interesting to think about.
Yes, it is an important distinction you are making. The word I didn't use is autonomy. I'd like to phrase it as something like: more freedom (of choice) is not necessarily the same as more autonomy. I need to give it more thought.
Anyone else care to chime in on this? Thoughts?
The action/inaction thing reminds me of something I heard the feminist author Letty Cottin Pogrebin say in a speech decades ago, and for some reason it has stayed with me all these years. She was recounting a time in college when she broke up with a boyfriend who then made a suicide attempt. She remembered rushing over to the student health center wondering desperately what she should do--should she get back together with him until he was less fragile, should she visit him or would that just make things worse, what should she say to him, etc. etc.?
She stopped by to see the psychiatrist at the center and poured all this out and asked him what he thought she should do.
He answered (I remember her repeating this very, very slowly): "You don't have to do anything, right now."
Don't know why that resonates so.
Off to watch Star Trek on MeTV....
Shelley, nice observation, thanks. On a similar note, I've developed a mantra for stressful situations that I speak to myself with similar deliberateness: Don't...make...this...worse. Kind of a Primum non nocere for the self. I could have used it when I was younger.