Sunday, February 25, 2018

Analogies between Ethics and Epistemology


It’s increasingly common for epistemologists (both formal and traditional) to explore analogies between epistemic justification (rationality, warrant, etc.) and moral rightness.[1] These analogies highlight the normative character of epistemology; they’re also fun to think about.
          
This post is about a commonly discussed analogy between reliabilism about justification and rule consequentialism. I’ve started to think that reliabilists have good reason to reject this analogy. But I’m not sure how they should go about doing so. Let me explain.

Begin with reliabilism:

S’s belief that p is justified iff S’s belief that p is the output of a reliable belief-forming process.
A belief-forming process is reliable iff, when employed in a suitable range of circumstances, its immediate outputs tend to yield a balance of true to false beliefs that is greater than some threshold, T.
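To fix ideas, here’s one natural way to make the threshold clause precise (the ratio formulation and the symbols are my gloss; nothing below hangs on this particular choice), where \(t(\pi)\) and \(f(\pi)\) count the true and false beliefs among process \(\pi\)’s immediate outputs across the relevant range of circumstances:

\[
\text{Reliable}(\pi) \iff \frac{t(\pi)}{t(\pi) + f(\pi)} > T
\]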

Compare this with satisficing hedonistic rule consequentialism:

S’s a-ing is right iff S’s a-ing conforms to a justified set of rules.
A set of rules is justified iff its internalization by most people would produce a balance of pleasure to pain that is greater than some threshold, T’.
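The parallel clause, in the same made-up notation (with \(p(R)\) and \(d(R)\) for the total pleasure and pain that would result from most people internalizing the rule set \(R\), and “balance” again read as a ratio):

\[
\text{Justified}(R) \iff \frac{p(R)}{p(R) + d(R)} > T'
\]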

The similarity in structure between these two theories speaks for itself. Further, it’s standard to assume that reliabilists endorse veritism, the claim that having true beliefs and not having false ones is the fundamental goal in epistemology. Reliabilism, then, might be said to be an instance of satisficing veritistic process consequentialism.

The starting point for many discussions of consequentialism about justification (e.g., Firth, Fumerton, and Berker) is a simple counterexample to a naïve consequentialist theory. According to the naïve theory:

A belief is justified if, of the available options, it leads one to have the highest ratio of true to false beliefs.

Here’s the counterexample (originally inspired by Firth, 1981):[2]

I am an atheist seeking a research grant from a religious organization. The organization gives grants only to believers. I am a very bad liar. The only way for me to convince anyone that I believe in God is to form the belief that God exists. If I receive the grant, I will form many new true beliefs and revise many false ones. Lucky for me, I have a belief-pill. I take it and thereby form the belief that God exists.[3]
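To make the payoff vivid, plug in some illustrative numbers (entirely made up; the case itself supplies none). Suppose that before taking the pill I have 200 true beliefs and 50 false ones, and that the grant-funded research would add 100 new true beliefs and let me revise 20 false ones, at the cost of the single false belief that God exists:

\[
\underbrace{\tfrac{200}{50}}_{\text{decline the pill}} = 4 \;<\; \underbrace{\tfrac{200+100+20}{50-20+1}}_{\text{take the pill}} = \tfrac{320}{31} \approx 10.3
\]

Of the available options, taking the pill yields the highest ratio of true to false beliefs.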

According to the naïve theory, my belief is justified. But my belief is obviously not justified. So much the worse for the naïve theory.

This brings me to my interest in—or puzzlement about—the reliabilism/consequentialism analogy. It’s clear that reliabilism renders the intuitively right result in the grant-seeking case, namely, the result that my belief is not justified. The belief-forming process that generated my belief in God—popping belief-pills—is not a reliable one, and for that reason my belief is not justified. So far, so good. But how can the reliabilist, qua veritistic consequentialist, say this? As far as I can tell, this question hasn’t really been discussed. And that seems strange to me. If reliabilists really are veritistic consequentialists, then shouldn’t they give my belief in God high marks?[4] And given the analogy—one that treats “justified” as analogous to “morally right”—wouldn’t this amount to saying the belief is justified?

One might think that in asking this I’m ignoring an important feature of reliabilism and rule consequentialism, namely, the fact that they are instances of indirect consequentialism. Indirect consequentialists aren’t interested in directly assessing the consequences of individual actions or beliefs, the response goes. Rather, they assess actions, beliefs, etc. indirectly, by reference to the overall consequences of the rules, processes, etc. that generate them.

This point is well taken. But the problem persists. Satisficing hedonistic rule consequentialism loses its appeal as a consequentialist theory if it doesn’t at least sometimes allow us to break certain general moral rules when complying with them is disastrous (see Brandt 1992, 87–8, 150–1, 156–7). Similarly for reliabilism, qua an instance of veritistic consequentialism, right? If the view doesn’t sometimes endorse jumping at an opportunity like the one presented in the grant case, it’s hard to see how it’s really committed to the idea that having true beliefs and not having false ones is the fundamental goal in epistemology.

So, I suspect the following:[5] if reliabilists are veritistic consequentialists, they must say something awkward about the grant-seeking case (or at least some case like it—maybe the demon possibility I mention in fn. 4). And I don’t think reliabilists should identify my belief in God as justified. Rather, I think they should push back on the reliabilism/consequentialism analogy itself. More specifically, they should deny—or maybe give a sophisticated reinterpretation of—at least one of the following:

      1. Epistemic justification is analogous to moral rightness.
      2. Having true beliefs and not having false ones is the fundamental goal in epistemology.
      3. If 1 and 2, then reliabilism is the epistemic analogue of satisficing hedonistic rule consequentialism.
      4. If 3, then reliabilists have to say something awkward about the grant-seeking case (or some case like it).

And this is where I’m stuck. 1–4 seem quite reasonable to me. Thoughts?


Clinton Castro
Philosophy Department
UW-Madison




[1] I’ve contributed to this trend myself, here (see especially section 4).
[2] This case is different from Firth’s; it is closer to Fumerton’s formulation.
[3] Berker thinks these cases can be generalized: “all interesting forms of epistemic consequentialism condone […] the epistemic analogue of cutting up one innocent person in order to use her organs to save the lives of five people. The difficult part is figuring out exactly what the epistemic analogue of cutting up the one to save the five consists in.”
[4] We can play with some details and make it epistemically disastrous to not take the pill—suppose that if I don’t get the grant the philosophy department will sic a Cartesian demon on me.
[5] I don’t think I’ve made an iron-clad case here!



5 comments:

  1. Thanks for doing this, Clinton. How about suggesting some kind of internalist constraint on justification for reliabilists, making them less consequentialist? So: S’s belief that p is justified iff S’s belief that p is the output of a reliable belief-forming process; and, S didn't violate any common-sense standards of epistemic responsibility in coming to believe that p. Maybe it helps with your puzzle in a way that preserves a useful analogy with ethics, and there may be independent reason to go for such a view.

    Replies
    1. This comment has been removed by the author.

    2. This is an interesting suggestion, Kyle. Reliabilists will want to account for these constraints in reliabilist fashion, of course. Lucky for them, there is a way to do this.

      There's a version of reliabilism known as "approved-list reliabilism". The idea behind the view is that there are two stages involved in making evaluations about justification. At the first stage, agents judge belief-forming processes as (un)reliable in the actual world and construct a mental list of reliable (or "approved") processes. At the second stage, agents defer to this list when judging a belief's justificatory status. This view, presumably, renders the right result in the grant-seeking case: popping belief-pills isn't on anyone's list of approved processes. The view has an internalist flavor to it, as the lists are based on our judgements and observations. It also renders judgements that square with common sense; on this view we will only approve of processes that we encounter in the actual world and take to be reliable. It's for these reasons, in fact, that approved-list reliabilism deals nicely with many problem cases, such as Norman the clairvoyant and the new evil demon (details on how approved-list reliabilism deals with these cases are outlined here). In short, approved-list reliabilism seems to be a way for reliabilists to internalize your suggestion without giving up on reliabilism.
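      In schematic form (the notation here is my own shorthand, not anything from the approved-list literature), the two stages come to this, where \(\pi_b\) is the process that produced belief \(b\) and \(\mathcal{A}\) is the agent's mental list of approved processes:

      \[
      \mathcal{A} = \{\pi : \pi \text{ is judged reliable in the actual world}\}, \qquad \text{Justified}(b) \iff \pi_b \in \mathcal{A}
      \]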

      I don't think, however, that introducing this sophistication quite resolves the awkwardness that I'm concerned with. The way I'm tempted to think about all of this, the core idea behind veritistic consequentialism is something like this: having justified beliefs is an excellent means to achieving the final aim of epistemology, having a large body of beliefs with a favorable truth-ratio. This has been thought to sit nicely with reliabilism (including the approved-list variant) because reliabilists take justification to depend on the reliability of belief-forming processes, which involves having a high output of true beliefs. The grant-seeking case is an odd one where we can cheat around justification in the service of the veritist's ultimate aim, true belief. And it seems to me that anyone actually motivated by the intuition behind veritistic consequentialism has strong reason to go off-menu in this case, even if they are attracted to approved-list reliabilism.

  2. Clinton, I like this piece quite a bit; my sense of what to suggest is guided partly by the particulars of the grant-funding example.

    I guess I'm attracted to a view that tries hard to discriminate between the following two sorts of cases:

    (1) Agent April pops a pill to get herself to believe that the deadline for the grant is April 1 instead of April 2, since she knows (right now) that if she believes it is April 2, her procrastinating personality will condemn her to waiting too long to get a quality application in on time. April is no fool; she wants a higher percentage of true beliefs over false beliefs long-term, and views this minor indulgence in falsehood as the sort of noble lie she must tell herself in service to the truth (much like some of us occasionally set our clocks ahead a few minutes to try to get places on time; while weird and often ineffective, this ploy is a sort of nudge that works by getting ourselves to believe that it's really noon when in fact it's just 11:50am).

    (2) An orthodox theist pops a pill to get herself to genuinely believe there is no god, in order to make her grant application sound more convincing so that she can get a science grant and increase her stock of true beliefs.

    I think (2) is a much bigger deal than (1). Perhaps the reason is this: the belief in question is not just another belief to stack onto the total pile. It's not a minor thing in one's web of beliefs. For many people it's like that plank in the belief-jungle-bridge that holds a number of other beliefs in place; that brick in the belief-pyramid that keeps the structure from caving in at a crucial part.

    So, then. Why does the gap between (1) and (2) matter? Well, because the example you cite of the atheist popping the believe-in-God pill seems more like (2) than (1). And so a treatment that casually counts the belief in question as a minor price to pay for the total percentage increase focuses on the overall quantity of beliefs without attending enough to the relative importance of each belief.

    Replies
    1. Thanks, Russell! I agree that (2) is a bigger deal than (1), and it seems like a bigger deal on two fronts.

      First, in (2) the theist is dropping a belief that presumably plays a major role in shaping her identity; her belief is plausibly a source of meaning, comfort, hope, etc. Because of this, it seems like we don't want to treat it like just any other belief. I think this highlights a tough problem: how to compare epistemic values against practical ones. And I have no idea how we'd go about making such comparisons! How do you compare the value of lots of future true beliefs against a commitment that’s constitutive of a large part of one’s identity?

      Second, belief in God can play a large unifying role in one’s web of belief. I think this is more like what you were focusing on. This time, belief in God is like a law of nature in a scientific system: it unifies a lot of data, is the source of many true beliefs, and so on. So again, it’s a mistake to treat it like just another belief. This, too, raises a lot of interesting questions. How should epistemic consequentialists think about big-ticket items like belief in God? This hasn’t really occurred to me before.

      These are both really interesting issues to keep an eye on. For the purposes of this blog post, I probably should have done a better job of isolating the issue I was worried about from these further issues. But I’m glad I didn’t! I’m glad they’re on my radar now.
