Jenny McCarthy is a celebrity in the United States and a
prominent anti-vaccine activist. She is the president of Generation Rescue, a
non-profit that advocates the view that autism is at least partially caused by vaccines,
and has written several books promoting this view. Since 2007, she’s been
featured on several media outlets where she’s been asked to defend her views on
the relationship between the Measles, Mumps, and Rubella (MMR) vaccine and
autism. In many of these interviews, it’s clear that those questioning McCarthy
are trying to hold her morally responsible for her view by demanding that she
justify her position—by holding her answerable. Despite these numerous
calls for McCarthy to justify herself, she hasn’t changed her view on vaccines
(although in a 2010 interview with Frontline, McCarthy clarified
the position of her group). In fact, calling on McCarthy to defend herself
in the public sphere arguably only serves to legitimize her views and expose
them to larger audiences. Though the CDC declared measles eliminated in the
United States in 2000, a record number of measles cases were reported in 2014,
due in large part to an increase in vaccine refusal. If we think that McCarthy’s position
on vaccines is incorrect and her advocacy of the position is blameworthy, how
can we hold her responsible for her behavior without reinforcing the very
behavior we find blameworthy?
Cases like these pose a problem for philosophers who work on
moral responsibility. Following the work of T. M. Scanlon, many philosophers
argue that there is a relationship between moral responsibility and
answerability—the demand for justification. Of course, philosophers have argued
about how exactly responsibility and answerability relate to each other. But
both those who argue that moral responsibility should be identified with answerability
(Smith 2012) and those who argue that answerability only captures one facet of
moral responsibility (Shoemaker 2011) face a problem.
In many cases, when we attempt to hold
someone morally responsible for an action by demanding that they answer for
their behavior, the person, rather than seeing the error of their ways, can become
even more confident in their reasons for action and refuse to alter their behavior.
This can have quite damaging effects when the behavior in question is dangerous,
violent, or qualifies as a public health risk. Such cases place those who
defend the relationship between moral responsibility and answerability in a
precarious position. If the very means by which we hold people responsible for
blameworthy behavior only serves to worsen that blameworthy behavior, then it’s
hard to see why we should hold people morally responsible in the first place.
And, if the answerability account of moral responsibility can’t easily be
operationalized, then perhaps we should look for another theory of moral
responsibility. Though those who defend the answerability account have remained
relatively silent on how to successfully hold an agent answerable, the
behavioral sciences can help address this question. By developing an account of
answerability that is informed by this research, answerability theorists can
shield themselves from the worry that their view can never be successfully
operationalized.
The case of Jenny McCarthy is not an isolated incident. Objecting
to people’s beliefs is notoriously ineffective in changing those beliefs. Confirmation
bias (Lord et al. 1979)—the tendency to accept evidence that supports one’s
previously held beliefs and discount evidence that doesn’t—is a robust
phenomenon that has been found in a wide variety of contexts. The backfire
effect is perhaps even more pernicious, indicating that when given evidence
against a belief, people will reject the evidence and hold the original belief
even more strongly (Nyhan & Reifler 2010). Asking people to give their
reasons for their beliefs is also unsuccessful when it comes to changing their
beliefs (Fernbach et al. 2013). But if neither objecting to people’s views nor asking
them to provide their reasons causes them to see the error in their ways, how
are we to successfully hold people answerable? Is answerability a misguided
account of moral responsibility?
Those who defend an answerability account of moral
responsibility, whether they think answerability just is moral responsibility
or answerability captures only a facet of moral responsibility, remain vague
about how we can successfully hold people answerable. Angela Smith argues: “In
my view, to say that an agent is morally responsible for some thing is to say
that the agent is open, in principle, to demands for justification regarding
that thing” (Smith 2012, 578). But we can demand justification in many
different ways, and we can do so more or less successfully. Though asking an agent to respond to arguments
against her view or asking her to list her reasons are demands for
justification, they are largely ineffective when it comes to getting agents to
jettison morally problematic beliefs and curbing morally blameworthy behavior.
Are there more effective ways to demand justification from moral agents? This
is a question that the behavioral sciences can help illuminate.
One recent study indicates that asking people to explain
their beliefs and the policies they endorse is more effective at reining in
extreme beliefs than asking people to respond to objections to their views or
to list their reasons for their beliefs (Fernbach et al. 2013). In particular, getting
participants to explain the causal mechanisms at play in the political policies
they endorsed undermined the illusion of deep understanding many participants
felt, which made it more likely for participants to adopt less extreme policy
beliefs. Fernbach and his collaborators also found that the call for
explanation made it less likely for participants to donate money to
organizations that supported their previously held political positions. Not
only did the demand for explanation rein in extreme beliefs, it also played a
role in changing participants’ behavior.
Answerability theorists may be right that holding people
morally responsible should involve a demand for justification. But how we
demand justification matters when it comes to altering people’s morally
blameworthy beliefs and behavior. Thus, answerability theorists should focus on
developing operational views of answerability, which are informed by the
behavioral sciences.
Hannah Tierney
Department of Philosophy
The University of Arizona
Works Cited
Fernbach, P., T. Rogers, C. Fox, and S. Sloman. 2013. Political extremism is
supported by an illusion of understanding. Psychological Science 24: 939-946.
Lord, C., L. Ross, and M. Lepper. 1979. Biased assimilation and attitude
polarization: The effects of prior theories on subsequently considered
evidence. Journal of Personality and Social Psychology 37: 2098-2109.
Nyhan, B. and J. Reifler. 2010. When corrections fail: The persistence of
political misperceptions. Political Behavior 32: 303-330.
Scanlon, T. M. 2008. Moral Dimensions: Permissibility, Meaning, Blame.
Cambridge, MA: Belknap Press of Harvard University Press.
Shoemaker, D. 2011. Attributability, answerability, and accountability: Toward
a wider theory of moral responsibility. Ethics 121: 602-632.
Smith, A. 2012. Attributability, answerability, and accountability: In defense
of a unified account. Ethics 122: 575-589.
Hannah,
This is tragically timely for some of us.
Let's pretend that Mr. T was running for president. (You know, from The A Team? "I pity the fool" and all that? Any resemblance to actual candidates is purely coincidental.)
And Mr. T articulates a claim, C, which is not just false, but demonstrably false.
The argument you are pressing here is that the most effective way to get Mr. T's view C quarantined, so that the general population does not believe C, is to pursue option (1) below instead of the other options:
(1) ask Mr. T to explain C.
(2) ask Mr. T to justify C by giving his reasons for believing C.
(3) ask Mr. T to answer the objection to C which demonstrates C is false.
(4) Ignore Mr. T.
While I have some hope about your argument being correct, I also have a worry about your argument.
What if Mr. T "explains" C by just giving a pair of further demonstrably false claims A and B? Now you've got two (or three, if you count the original) claims running around! It's like the ancient hydra: cut off one head, and more appear in its place.
One other quick question: it sounds like you are optimistic (based on the behavioral research, of course) that asking for explanations can not only reduce the extremism of Mr. T's belief in C (so that now he only believes…c…lowercase…), but also change the probability of Mr. T's followers believing C. My question is this: which of these do you think is the more important one?
This seems right to me, too, Russell. I haven't seen the Fernbach paper (yet!), but I imagine they would have controlled for all sorts of incentives (e.g., to get attention, to signal strength and decisiveness to an audience). People face all those incentives in the real world, and they're amplified in the sort of cases you and Hannah highlight. So I guess that operationalizing answerability in such cases would look a lot different than it does in more typical individual interactions.
In defense of Hannah, maybe the way this works out in practice is that you ask someone to explain C, and then they make two other demonstrably false claims A and B (this happens in the classroom a lot), and then you ask them to explain A and B. At least for students, this is the point where things break down and the student realizes she can't say why she thinks we should have a right to open carry, or why socialism is bad. For Mr. T, though, you might just go in circles.
So here I'd like to say something in favor of the answerability folks: perhaps they would say that their account is correct and that people ought to be able to give reasons for their views, but in fact many people are not epistemically virtuous, and this is not a fault of the answerability account but a vice in the people.
Thanks for posting Russell, Kyle, and Beth!
I think Beth is right—usually the thing to do in cases where people provide further false explanations for their false views is to ask for further explanation. It’s probably more common for people to provide false explanations for false views than to provide true explanations for false views, after all. Typically, asking for someone to justify their view doesn’t stop once they’ve provided a justification—it’s a process, or conversation, that can go on for several iterations.
Also, and more work would need to be done, but it could be the case that providing false explanations still undermines the illusion of deep understanding that is so pernicious. In the Fernbach studies, participants were providing inadequate explanations for their extreme views, yet they were less confident in those views after they were asked to explain them and they were less likely to behave in a way that supported those views weeks after the intervention.
To answer Russell’s second question, I think both getting people to become less confident in their morally blameworthy views and stopping the harm associated with these morally blameworthy views are important goals, but they’re important for different reasons. Getting someone to give up their morally blameworthy view is, at least partially, motivated by attempting to hold them morally responsible for their views. Stopping the spread of a morally blameworthy view is, at least partially, motivated by attempting to stop the harm associated with that view. So, depending on whether you care about holding people morally responsible or stopping the spread of a certain harm, you’ll value these goals differently. Unless of course you value holding people morally responsible and stopping harm equally, in which case you’ll think these goals are equally important.