"Though, in reviewing the incidents of my administration, I am unconscious of intentional error, I am nevertheless too sensible of my defects not to think it probable that I may have committed many errors."

This seems like an admirably humble thing to say, but one of the philosophically interesting things about it is that it also seems like a reasonable thing to say. That is, Washington does not seem to be describing an unreasonable or irrational attitude about his decisions as president. It is often the case that when examining our actions or beliefs, no one of them seems to be a mistake, and yet we know that we are fallible beings who have likely made at least some mistakes.
The trouble is that certain ways of expressing this general idea lead to puzzling conclusions. Suppose Washington had said something slightly different:
"Having carefully reviewed each decision I made as President, I believe of each one that it was not a mistake. Nevertheless, I know that I am not perfect, and so I believe that I must have made some mistakes as President."

This also seems like a reasonable thing to say. Having evaluated all of the consequences, obligations, and whatever other relevant factors, Washington might reasonably believe, for example, that appointing Jefferson as Secretary of State was not a mistake. He might then do the same for each other decision that he made until, for each decision he made, he reasonably believed that it was not a mistake. To see the puzzle more clearly, let’s assign a name to each of Washington’s decisions. We’ll call the first decision ‘D1’, the second ‘D2’, and so on. So, we can represent Washington’s beliefs about his decisions like this:
D1 was not a mistake.
D2 was not a mistake.
D3 was not a mistake.
…
Dn was not a mistake.

Given that Washington’s careful examination of each decision has left him with good reasons to think that it was not a mistake, it seems reasonable for him to believe each proposition on the list. However, it also seems reasonable for Washington, aware of his own imperfections, to believe that some of D1-Dn were mistakes.
But these beliefs cannot all be true. If the beliefs on the list are all true, then none of D1-Dn were mistakes, and so the belief that some of them were mistakes is false. On the other hand, if some of D1-Dn really were mistakes, then some of the beliefs on the list must be false. More than that, with a little reflection, it should be obvious to Washington that these beliefs cannot all be true, and as a result it does not seem reasonable for Washington to believe all of them. So, now we have a puzzle, a version of the Preface Paradox. Each of Washington’s beliefs seems reasonable, and yet it seems unreasonable to hold all of them together.
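The inconsistency here is purely logical, and we can make it vivid mechanically. Here is a minimal sketch in Python (the value n = 4 is a hypothetical stand-in for Washington’s many decisions) that brute-forces every way the decisions could have turned out and confirms that no possibility makes all of the beliefs true together:

from itertools import product

n = 4  # hypothetical, purely illustrative number of decisions

satisfying_worlds = 0
for world in product([True, False], repeat=n):
    # world[i] is True if decision i+1 really was a mistake
    each_not_a_mistake = not any(world)  # every 'Di was not a mistake' belief is true
    some_were_mistakes = any(world)      # the 'some of D1-Dn were mistakes' belief is true
    if each_not_a_mistake and some_were_mistakes:
        satisfying_worlds += 1

print(satisfying_worlds)  # prints 0: no possible outcome makes all the beliefs true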
And Washington is not alone here. You’re very likely in the same boat. Consider all of your beliefs about some topic—biology, for example. Supposing you’re a good epistemic agent, each of those is a belief in a proposition that you have carefully considered the evidence for and concluded is true. So, each of those beliefs is reasonable. However, you know that you are imperfect. Sometimes, even after careful consideration, you misread the evidence and accidentally believe something false. So, you have good reason to believe that at least one of your many beliefs about biology is false. And now you have obviously inconsistent beliefs, all of which seem reasonable. So, what should you do?
I think that you and Washington should keep all of your beliefs, even though you know that they are inconsistent. The trick is to explain why it is reasonable to maintain these particular inconsistent beliefs, even though it is generally unreasonable to have inconsistent beliefs. If I have just checked the color of a dozen swans, for example, and come to believe of each one that it is white, it would be unreasonable for me to believe that some of them were not white. So, what is it about Washington’s situation that makes it different from this swan case?
One interesting difference is that it is reasonable for me to think that if one of the swans had not been white, I would have some sign or evidence of that—if some of them were black, for example, I would have noticed. Washington, on the other hand, not only has good reason to think that he has made some mistakes, but also has good reason to think that he might not have noticed some mistakes in his evaluation of hundreds of complex decisions. But this fact does not seem to prevent him from believing that he would have noticed if, for example, Jefferson’s appointment had been a mistake. He might think, for example:
"If appointing Jefferson had been a mistake, he would have been a poor Secretary of State, which is something I would notice. So, if it were a mistake, I would have noticed."

Given his careful inspection of all of his evidence about each decision, Washington could give a similarly good reason for believing of each decision that he would have noticed if it were a mistake. In fact, the point of carefully inspecting the evidence about each decision seems to be that, in doing so, Washington would notice if it were a mistake.
So, even though, for any decision we pick, Washington has good reason to think he would have noticed if it were a mistake, he still has a good reason to think that he might not have noticed if some of his decisions were mistakes. Perhaps this is what makes it reasonable for him to believe that each particular decision was not a mistake while still believing that some of them were mistakes.
Brandon Carey
Department of Philosophy
Sacramento State
Resolving paradoxes is a good thing to do when we can. This Washington version of the Preface Paradox, unfortunately, is a symptom of the imprecision in our concepts of belief and knowledge. Because those concepts are imprecise, we should not be surprised that we run into doxastic and epistemic paradoxes we cannot solve.
I have no suggestions for how to remove the imprecision, or confidence that it is removable to our satisfaction, but I do have a suggestion for how to think about the paradox.
When we focus attention on a small set of beliefs, it is much easier to find an inconsistency, if one is present, and to remove it than when we are faced with a much larger set of possibly inconsistent beliefs. With a large set, we often won’t find the heart of the inconsistency even if we suspect it is there; we will simply live with our inconsistent beliefs, and we can be rational in doing so, even if our beliefs about those beliefs are very clearly inconsistent. I think you and I agree on this, don’t we?
Let Di abbreviate Washington’s decision number i, where i is a positive integer. Then Washington believes the following:
D1 was not a mistake.
D2 was not a mistake.
D3 was not a mistake.
…
Dn was not a mistake.
If ‘B’ is the modal belief operator on propositions, so that ‘B(Di)’ abbreviates Washington’s belief that Di was not a mistake, then we have
B(D1), B(D2), B(D3), …, B(Dn)
from which Washington can conclude
B(D1) & B(D2) & B(D3) & … & B(Dn).
Yet, isn’t it always rational to believe the conjunction of what you believe? If so, he can assert:
B(D1 & D2 & D3 & … & Dn).
However, because Washington is not infallible, and the risk of error grows with the number of decisions, it is likely he made a mistake somewhere. Therefore, believing this about his beliefs, he arrives at the reasonable conclusion
~B(D1 & D2 & D3 & … & Dn).
So, Washington has good reasons to believe each conjunct of the following contradiction:
B(D1 & D2 & D3 & … & Dn) & ~B(D1 & D2 & D3 & … & Dn).
The treatment of the paradox requires attention to the size of n. We can see that Washington’s confidence in denying the long conjunction will become lower as n gets smaller. To appreciate this point, imagine Washington having made only a very small number of decisions in his career. Imagine that he has made only one decision, so that n = 1 and we have B(D1). Now the appeal to his not being infallible is going to be much less effective; and, if he is rational, he won’t at all want to say
~B(D1)
because he has much stronger, independent evidence for B(D1) than for ~B(D1).
Only if n is very, very large and he has looked for but failed to find the heart of the inconsistency, will he rationally assert
~B(D1 & D2 & D3 & … & Dn)
and give up trying to find the bad apple(s) in the barrel of Ds. Yet there is never going to be a single value of n where he goes from knowingly consistent beliefs to knowingly inconsistent beliefs. Instead the transition point is inherently imprecise. That’s the nature of belief.
Are you irrational for holding an inconsistent set of beliefs? Not necessarily. Surely everyone does. Are you irrational for knowingly holding an inconsistent set of beliefs? Again, not necessarily, especially if the size of the set is very, very large. How large? No one can say.
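To put rough numbers on the point about the size of n, here is a minimal sketch, assuming (purely for illustration) a fixed, independent 1% chance of error on each decision. The appeal to fallibility is nearly worthless at n = 1 and nearly irresistible at n = 1000:

per_decision_error = 0.01  # hypothetical: a 1% chance that any given decision was a mistake

for n in (1, 10, 100, 1000):
    # probability that at least one of n independent decisions was a mistake
    p_some_mistake = 1 - (1 - per_decision_error) ** n
    print(f"n = {n:>4}: P(at least one mistake) = {p_some_mistake:.3f}")

# n =    1: P(at least one mistake) = 0.010
# n =   10: P(at least one mistake) = 0.096
# n =  100: P(at least one mistake) = 0.634
# n = 1000: P(at least one mistake) = 1.000

The output also suggests why no single value of n marks the transition: the probability climbs smoothly, with no obvious threshold.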
We're definitely in agreement that it is sometimes rational to knowingly maintain an inconsistent set of beliefs, but there are two things I would question:
1) I think I'm committed to denying that it is always rational to believe the conjunction of what you believe. I think B(D1) is rational in part because Washington has good reasons to think that he would have noticed if that belief were false, and the same goes, mutatis mutandis, for B(D2), B(D3), and so on. But I do not think it is rational for him to believe B(D1 & D2 & D3 & … & Dn), in part because he does not have good reasons to think that he would have noticed if that conjunction were false--he should instead believe that he might not have noticed. I think there are other good reasons to reject this principle, though (e.g. it is reasonable for me to believe of each ticket in a guaranteed-winner lottery that it will lose, but I should not believe the conjunction of these claims).
2) I'm not sure that size is the (only) issue. It might be reasonable for Washington to accept a handful of beliefs on the basis of good, but far-from-conclusive, reasons. In that kind of case, the relatively weak support for each belief might make it reasonable to think that at least one of them is false, even for a fairly small number of beliefs. It's not obvious to me that it would be irrational for Washington to maintain that fairly small and obviously inconsistent set of beliefs.
This is a bit abstract for me…I’d be more interested in a conversation about irrationality, decision-making, and the current President.
But, on this, here’s what comes to mind.
First, from the quote alone (I’ve not seen Hamilton yet), it seems Washington was making a different claim.
“Though, in reviewing the incidents of my administration, I am unconscious of intentional error, I am nevertheless too sensible of my defects not to think it probable that I may have committed many errors.”
In this statement, Washington was not saying that his decisions were not mistakes, but only that he has not made any intentional errors, to his knowledge. This involves the distinction between intentional and unintentional (inadvertent or accidental) mistakes. This also involves an assessment based on what Washington knew as opposed to an assessment based on all things considered.
It is entirely consistent to say, (A) I have made no intentional errors to my knowledge, but it may be the case that (B) I have made many errors.
A is true even if Washington has made many unintentional errors or even if he has made intentional errors outside his knowledge. For example, let’s assume that, as Secretary of State, Jefferson was engaged in some serious acts of corruption, which were unknown to everyone until the present day. With this new information, we can say that Washington’s appointment of Jefferson was a mistake. So A is true and B is true. No paradox.
Second, if we start with your restatement of Washington’s claim, we can offer a different analysis.
“Having carefully reviewed each decision I made as President, I believe of each one that it was not a mistake. Nevertheless, I know that I am not perfect, and so I believe that I must have made some mistakes as President.”
Brad formulated this statement as the following contradiction (using Brad’s belief operator on propositions):
B(D1 & D2 & D3 & … & Dn) & ~B(D1 & D2 & D3 & … & Dn).
I wonder if we can resolve the paradox by pointing to an equivocation. Specifically, I wonder if, by “believe” in the restatement, Washington means two different things: a first-order belief about propositions and a second-order belief about a belief. The first conjunct refers to Washington’s belief about his decisions, that they were error-free. The second conjunct refers to Washington’s assessment of his belief, that he could be mistaken or there could be information to which he was not privy.
One way to analyze this is to introduce a second modal belief operator, ‘B*’, which operates on beliefs.
So, it may be more accurate to write the restatement in this way:
B(D1 & D2 & D3 & … & Dn) & B*(~B(D1 & D2 & D3 & … & Dn)).
No contradiction. The second conjunct involves a belief about a belief, a second-order belief, an assessment, based on all things considered, that his belief about his decisions is false.
It may be the case that B*(B(D1 & D2)) is more likely than B*(B(D1 & D2 & D3 & … & Dn)), but maybe this has more to do with the second-order nature of the assessment in the second part of Washington’s statement, which admits of fallibility, rather than just the number of decisions.
You're absolutely right that I'm ignoring Washington's use of 'intentional' and that there are at least a couple of good senses in which a decision can be a mistake. I think we can set up the paradox, though, even if we exclude accidental mistakes and use a fixed understanding of 'error' or 'mistake'. For example, we can imagine Washington looking back on his time in office, considering only the things he intended to do that had the consequences he intended them to have, assessing whether they were, given what he knows at that time, mistakes. Even in that situation, I think we can generate the same problem.
The other possible equivocation is trickier, but I think also avoidable. We can certainly imagine Washington believing that some of his beliefs B(D1)-B(Dn) are false rather than that some of his decisions D1-Dn were mistakes, but we don't have to formulate the paradox that way. We can also set it up so that the beliefs have propositional contents that are straightforwardly inconsistent. Washington could just believe the propositions 'D1 was not a mistake', 'D2 was not a mistake', etc. and also believe 'at least one of D1-Dn was a mistake'. Then, none of the beliefs are about beliefs, but we still get the puzzle.
Great post Brandon, thanks.
Do you think this problem persists within a Bayesian framework? It seems to me that it does not. If I hold each of B1-B100 with a credence of .99, and they are all independent, then I will believe the conjunction of all of these beliefs with a credence of .37. There isn't any contradiction here, but it preserves, not only the fact, but the rationality of tending to reject the truth of the conjunction while tending to accept the truth of each individual claim. (This is compatible with what Brad said about the size of the belief set, though he and I are making different suggestions here. He is suggesting that the concept of belief is inherently vague. I am suggesting that it has in fact been explicated quite precisely in Bayesian terms and that one virtue of this explication is that this sort of problem disappears. Note: I don't mean that Brad and I disagree. He is probably saying that the ordinary concept of belief is vague, and I am saying that there is a more precise concept that gets rid of the problem.)
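The .37 figure is just the hundred individual credences multiplied together (given the independence assumption); a quick sketch to check the arithmetic:

credence_each = 0.99  # credence in each individual belief B1-B100
n_beliefs = 100

# with independence, the credence in the conjunction is the product
credence_conjunction = credence_each ** n_beliefs
print(round(credence_conjunction, 2))  # 0.37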
The other thing I found myself wondering about is to what extent this problem is affected by whether we adopt an intellectualist or a behaviorist account of belief. On a behaviorist account, to believe that P is, very roughly, to behave in a way that would be rational if P were true. It seems to me that often there will be relatively clear behavioral criteria for specific beliefs, but not so much for the conjunction, especially of a large set of them. This could just be because of complexity, but it might also be that beliefs that are perfectly consistent and epistemically justified on an individual basis can tend to produce behaviors that are more than the sum of their parts when considered together. And often these will be conflicting behaviors. So I wonder whether, when we deny the conjunction, it is partly because we are noticing this. Washington might be saying: I am prepared to act in accord with the truth of any one of these, but I would be reluctant to take any action that depended on the truth of them all.
No, I don't think Bayesians have this problem, for just the reason you say--they should already reject the conjunction rule. My main concern with that solution is that it seems reasonable to believe that some of the decisions are mistakes for reasons other than the cumulative probability of error. Washington could think, for example, 'I made a similar number of decisions in the eight years prior to becoming President, and several of those were mistakes. So, some of my decisions as President were also mistakes.' This seems like quite a different reason from adding up the many small independent probabilities that each decision was a mistake. So, the Bayesian explanation for why the belief (or >.5 credence, if you prefer) that one of the decisions is a mistake is rational doesn't seem to match up with the actual reason that supports it. That's not necessarily a deal-breaking problem, but it makes me interested in finding a different way out.
The behaviorist suggestion is interesting, and you might be right that at least some view connecting belief and action motivates the intuitions here. We might think that there are certain behaviors that would be rational for Washington at the end of two mistake-free terms as President, such as asserting that he has made no mistakes or running for a third term, which he is not exhibiting. This at least indicates that he does not believe the conjunction (although I don't know if it tells us anything about whether he should believe it), even though he did act consistently with each individual belief being true.