Monday, February 22, 2016

Seriously lucky knowledge

The word knowledge, like most words, has many different meanings. Philosophers who claim to study the nature of knowledge - epistemologists, we call them - know this, but most assume that there is one serious meaning, and that this is the one we philosophers have had a bead on since antiquity.

I say no.

Consider the idea of common knowledge. It was once common knowledge that a variety of diseases were curable by bloodletting. Today it is common knowledge that no diseases are curable by bloodletting. Epistemologists widely agree that to seriously know that P, it must actually be the case that P. So it follows that common knowledge is not a serious sort of knowledge.

But serious to whom? Scientists are seriously interested in the nature of common knowledge, because it figures centrally in their attempt to understand human cooperation. There are several other serious uses of the term to be found within distinct scientific enterprises. For example, in information science an entity is typically said to have knowledge to the extent that it has information it can put to use. But the usability of the information that P is insensitive to whether P is in fact the case.

The concept of knowledge that interests epistemologists, then, is not the only one that interests people involved in careful, serious and systematic inquiry. Rather, it is simply the one that is meant to handle problems that are of particular interest to epistemologists. We shouldn't pretend differently.

The mother of all epistemological problems is universal skepticism. When the skeptic claims that we know almost nothing about the world, what she is usually saying is that we lack any rational basis for ruling out - or even regarding as improbable - any number of skeptical hypotheses according to which reality is radically different than we believe it to be. In this context, when we object to the skeptic that we do have knowledge about the world, we are asserting that the world is not radically different than we believe it to be, and that we have excellent reasons for saying so.

This is the legitimate origin of the traditional, though now widely disputed, analysis of knowledge as justified true belief. It says that I know that P iff:
  • I believe that P;
  • I am justified in believing that P;
  • P
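
For readers who like it compact, the analysis can be written in the standard shorthand of epistemic logic - a notational gloss on the list above, nothing more - with $K$, $B$, and $J$ read as "I know that", "I believe that", and "I am justified in believing that":

$$K(P) \iff B(P) \land J(P) \land P$$
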
If you’ve been exposed to contemporary epistemology at all, you’ve heard about the Gettier Problem, which can be summarized concretely as follows. Suppose there is a pig in the sty, Wilbur, but you do not see him. What you see is an incredibly life-like pig robot that Fern has put in the pen to keep Wilbur company. In such a case you believe that P (pig in the sty), you are justified in believing that P, and P (because of Wilbur).

Now, it takes some work, but most philosophy students can be made to agree that this is a counterexample to the above definition. That is, that it clearly satisfies the traditional definition of knowledge, but is clearly not a case of knowing there is a pig in the sty. Why? Well, the crude answer is that you just got lucky. Epistemologists count on us to have a strong feeling that this sort of luck is incompatible with the serious sort of knowledge that we all have in mind.

The Gettier Problem was a seismic event in philosophy, triggering a massive rethink of the concept of knowledge. Almost all of the proposed solutions - and none has emerged victorious - involve adding some condition that would rule out the luck that Gettier counterexamples require.

I think this has been a mistake.

To see why, let’s go back and look more closely at the skeptic’s challenge. She informs us that we have no rational basis for ruling out skeptical hypotheses. Fair enough, that deserves an answer. But there is nothing in this challenge that requires us to build this justification condition into the definition of knowledge itself. It is perfectly open to us to stipulate that whether we have knowledge is one thing; whether we are rationally justified in believing we have it is another. The skeptic's question goes to the latter.

What if we simply removed the justification condition altogether? Then we would end up with a very simple definition of knowledge. I know that P iff:
  • I believe that P;
  • P
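
In the same shorthand as before, the justification conjunct simply drops out:

$$K(P) \iff B(P) \land P$$
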
If philosophers adopted this definition of knowledge, the problem of skepticism would remain to be dealt with, as it should. But the Gettier Problem vanishes. You do know there is a pig in the pen after all. Lucky you.

It's important to see that we would lose no descriptive power by adopting this definition. We would simply accept the implication that some instances of knowledge are seriously lucky, which means the claim that someone knows that P does not entail an endorsement of the means by which she came to know that P. This becomes a distinct issue.

You'll be unsurprised to find few epistemologists on my bandwagon. For most of my tribe it is just axiomatic that you cannot come to know something as a result of, say, blind faith or random guessing. (Students who suggest otherwise are scoffed into submission.)

Note well that I do not suggest that this analysis is completely intuitive. Even to the unindoctrinated, it may sound odd to say that I know there is a pig in the sty under the Gettiered conditions described above. But here is my point: the analysis does not have to be completely intuitive. It just has to satisfy the aims of epistemology. Every serious form of inquiry employs a vocabulary in which familiar sounding terms are given technical meanings for the purpose of clarity and theoretical fecundity.

This is why it is so important to recognize that there is no one serious concept of knowledge. It frees epistemologists to adopt the one that will allow them to get serious about solving the problems that really concern them.

G. Randolph Mayes
Department of Philosophy
Sacramento State University
www.grandolphmayes.com


15 comments:

  1. Here's a question and an objection:

    Question: Why think that any skepticism remains on this definition? Suppose a skeptic challenges some belief I have in some external world proposition, E, claiming that I do not know that E. I respond: "I believe E, and E. That's all it takes to know that E. Therefore, I know that E." The skeptic might go on to ask how I know that the second condition is satisfied (that is, how I know that E), but, given this definition, that is simple: "I believe that E, and E. That's all it takes to know that E. So, that's how I know that E." The skeptic might instead ask what rational basis I have for believing E, but I can cheerfully reply: "None. Nevertheless, I know that E. Knowledge doesn't require a rational basis for belief." Finally, suppose the skeptic asks what rational basis I have for believing that I know that E. Here again, I can reply: "None at all. Nevertheless, I believe that I know that E, and I know that E. That's all it takes for me to know that I know E. Therefore, I know that I know that E, which is pretty good." So I know that E and I know that I know that E. I don't have a rational basis for E or for the proposition that I know that E, but knowledge doesn't require that. So, how can the skeptic challenge my knowledge if we both accept the definition?

    Objection: Propositional knowledge (of any kind) requires a reason, but on this definition, I can know that P even if I have no reason for P. Suppose I flip a fair coin, but don't look at it. I guess that the coin came up heads (call this proposition ‘H’) and form the corresponding belief that H. As it happens, the coin came up heads, so H. By your definition, I know H, but that seems wrong for three reasons. First, reasoning from known propositions provides a reason, but reasoning from H does not. Suppose I reason: 'I guessed that H, and H. So, my guess was correct.' I know that I guessed that H, and I know that this is a good inference pattern. If I also know H, then this argument should give me a good reason to think that my guess was correct, but it doesn’t. Second, it is normally reasonable to act on known propositions. Suppose some huge practical consequence is tied to the coin flip. If the coin came up heads, then I win a billion dollars, say. Barring other considerations, if I knew that H, it would be reasonable to act as if I had just won a billion dollars. So, I could reasonably quit my job, commit to buying some houses, etc. But this is not reasonable, since I have no reason to believe H. Third, investigating the world seems like it allows us to increase our knowledge, but on this definition, it does not. Suppose I check the coin and discover that it came up heads. It seems like I’ve learned something about the world (namely, H), but by your definition I knew all along that the coin came up heads. And if I want to know about the world, I needn’t bother investigating or looking for evidence—forming a bunch of random beliefs is the quickest way to gain knowledge. Maybe these are not reasons to think that this isn’t knowledge. Maybe they are just reasons to think that this isn’t the kind of knowledge relevant to solving problems about reasoning, action, and investigation. But what reason is there to think it’s knowledge at all? Why not just continue calling it ‘true belief’?

  2. Hey Brandon, thanks for the thoughtful feedback.

Regarding your question, I think anyone who responds that way to the skeptic doesn't understand the skeptic's challenge. The skeptic in this case is asking: On what rational basis do you assert E? To simply repeat E is to beg the question. I think it sounds like a problem just because the expression "How do you know?" is being repeated. My proposal changes the way the skeptic's question has to be asked, but it does not make it any easier to answer.

Regarding your objection, I don't accept your initial stipulation. In fact, it seems to me that rejecting the requirement for justification just is the rejection of that stipulation. What I don't reject is that it is normally perfectly legitimate to ask for a reason when someone claims to know something (and often irrational not to). But this does not imply that the reason itself is part of what it is to know. I think building this requirement into the concept of knowledge is what causes all the problems.

    With regard to your specific points I respond as follows.

    1. Reasoning from known propositions provides a reason.

Response: This is only true if knowledge implies justification. Substitute "justified propositions" for "known propositions." It's much clearer.

    2. It is normally acceptable to act on known propositions. Same response as 1.

    3. Investigating the world seems like it allows us to increase our knowledge, but on this definition, it does not.

Response: This isn't generally true, of course. It is true only for the kind of example that you cite, viz., when you already know a proposition H in my sense, by truly believing it, but without justification. On my view, if you truly believe H, then you know it. But that does not mean that you have any reason to believe or assert that you know it. It is just a fact about you. Going on to acquire a justification or reason for H is still a very valuable activity, as it provides you with the ability to inform other rational agents.

Regarding your final point - that this doesn't seem like the kind of knowledge that helps to answer questions like these, so why call it knowledge at all?

Response: I claim that as a result of adopting this way of speaking, we solve, or dissolve, several problems (the Gettier problem and the value problem, specifically), and we lose no precision in our ability to deal with the questions that remain; in fact, we gain it. True belief is an extremely valuable thing. It is just as valuable if you come across it by luck as it is if you come by it honestly. That is (partly) why we should incline toward calling it knowledge.

3. I agree with a lot of what Brandon said. What is the use of your definition of knowledge if people can't even rationally use it themselves? On your view, I can't reasonably know that I know anything. I might very strongly believe that H, but I can't reasonably demand my billion dollars. This very situation would make me realize that I don't really know H. Finally, I check the coin, and it was heads. Then I think, I was right all along - my belief was correct, but I didn't really know that until now.

    Randy, maybe you can offer another example of the use of your definition - what is the advantage of seeing it your way to non-epistemologists?

4. Dan, I don't know what you mean when you say that on my view I can't reasonably know that I know anything. I wouldn't say it that way, of course. I would say that you can reasonably (and unreasonably) believe that you know things. So, in Brandon's example, I would say that I may know it, but I don't have the resources for convincing anyone else of that, so I can't demand the money. The fact that I don't have those resources doesn't entail that I don't know it. That's just the way your corrupted intuitions about knowledge are requiring people to speak. There is nothing I can't say that someone using a JTB conception can say. I can just say it more clearly because it doesn't involve me in unnecessary self-referentiality.

Non-epistemologists aren't the problem. Most students are happy to call true belief knowledge from the very beginning. We are the ones who have to convince them otherwise, with examples based on a flawed philosophical sense of genuine knowledge, not unlike the flawed philosophical senses of genuine happiness or meaning.

Regarding your initial "What is the use...?" question, remember that internalism is virtually dead in epistemology already. Externalists already stipulate that whether or not you have knowledge is a fact about you, not one that is inherently accessible to you.

    Replies
1. I think I understand the view better now, but I'm not sure I see the theoretical advantages. Is this the idea?
      Beliefs are still reasonable and unreasonable, and my reasons determine what it is appropriate for me to assert, how I should act, etc. Knowledge is an entirely different kind of belief--the true kind. So, I can have reasonable and unreasonable knowledge, and sentences like 'Sally knows that class is cancelled, but she shouldn't believe it' are perfectly coherent.

      If that's the view, then I'm sympathetic to Dan's question about the use of knowledge. Reasonable knowledge still seems useful, since I can use what I reasonably know in practical and theoretical reasoning, it's appropriate for me to assert propositions that I reasonably know, etc. But unreasonable knowledge can't fill this role, since it doesn't require any reasons. Is there something else that it is useful for or some other reason I should want it? It seems to me the answer might be yes, but I'm not sure what kinds of uses you have in mind for knowledge.

      As to the question of value, if reasonable beliefs are also valuable (which it seems to me that they are), isn't reasonable knowledge more valuable than just plain knowledge? If so, then it seems false that knowledge is just as valuable whether you come by it honestly or not--reasonable knowledge seems more valuable than unreasonable knowledge.

      Here's an alternative proposal that will maybe help me see what theoretical virtues your definition has. Suppose I give this definition:
      S knows that p just in case S has a rationally justified belief that p
      This definition also has no Gettier problem and only one kind of epistemic value. Instead of reasonable and unreasonable knowledge, we have true and false knowledge. Any knowledge can be appropriately used in reasoning and (I think) appropriately asserted, but true knowledge still seems more valuable to me than false knowledge. Is this definition any better or worse than yours?

  5. Brandon, interesting points, thanks.

    Your summary of my view is correct.

Regarding reasonable knowledge: Of course, you can use whatever propositions you like in practical and theoretical reasoning. So I take your point to relate to the conditions of success. If you are reasoning on your own, then a true belief is just as good as a justified true belief as far as producing reliably true inferences goes. If you are reasoning as part of a cooperative enterprise with rational agents, then typically you need to justify beliefs that aren't obviously true or you will fail to make a positive contribution.

    You ask whether unreasonable knowledge, by which I take you to mean utterly lucky true belief, is useful. Hell yes, it is useful. If you know that you have two dollars in your pocket, then you can buy a coffee at Starbucks. If you know the way out of the forest, then you can save our butts. True belief is unbelievably utterly awesome, and epistemologists should be flogged whenever they put the word "mere" in front of it.

    I'm not sure what you think the value of reasonable false beliefs is. I think we value reasonable people for their ability to produce true beliefs, but I don't think we particularly value the false beliefs they produce by rational means. I would rather have a true one produced by irrational means just as I would rather have a good cup of coffee made by a bad coffee maker than a bad cup made by a good one.

    But you might be saying reasonable true beliefs are more valuable than lucky true beliefs. That's of course the value problem in epistemology. But I'm ok with this assertion. There is no part of my view that requires anyone to accept that true belief is the only epistemic good. As noted above, a justified true belief has greater value than a true belief partly because it becomes easier to replicate in other rational agents. But there is no reason to make this good, or any other good, a necessary condition on knowledge itself. It just means that there are certain dimensions along which knowledge can be graded.

    The idea that knowledge is rationally justified belief is already part of the lexicon. For example, when we speak of the knowledge of ancient astronomers, that’s what we are basically talking about. So that, too, is a perfectly good way to analyze knowledge for particular purposes, but I don’t think it a very apt one for epistemology. The goal of rational inquiry is true belief. The problem of skepticism is whether any of our beliefs are true, not whether they are justified.

    Replies
    1. Thanks for responding. I'm teaching the traditional analysis next week, so this is helping me find more motivation for objections.

      I actually meant the 'can' in 'I can use reasonable knowledge in reasoning' to be epistemically normative. It's possible for me to reason from any belief at all, but I can only rationally reason from reasonable beliefs. If I base new beliefs on knowledge that I have no reason for, I'm no longer being reasonable. Whether I end up with a true belief (which I obviously would if the inference is valid) is another matter.

      I'm not sure I follow why unreasonable knowledge can be useful in the way you suggest given what you've said about requiring rational justification for assertion/action. How can I use my knowledge of how to get out of the forest to get out of the forest if I have no reason for it? I won't be able to offer you any reason to think that it's the right way out, and you'll have no reason to follow me. In fact, I can't even offer myself a reason to think that I know the way out of the forest (although in fact I do). It's true that if I irrationally followed my belief, I would be successful in getting what I want, but why would I follow my irrational belief? That, I think, is why unreasonable knowledge seems useless to me--in order to use it, I have to act on a belief without having any reason to think that it's true.

      Regarding value, if knowledge can be graded along a dimension of rational justification, and rational knowledge is more valuable than irrational knowledge, doesn't it immediately follow that irrational knowledge is not just as good as rational knowledge? If so, then aren't true beliefs that are lucky (in one sense, anyway) less valuable than true beliefs that are not lucky?

      I'd like to talk to you more (although not necessarily here) about how you understand skeptical challenges. It seems to me that many skeptical arguments don't challenge the truth of any belief, but rather the justification. E.g. inductive skeptics don't seem to deny that the next raven will be black, but only that we can know or be justified in believing this based on previous observations.

  6. Brandon, thanks for the continuing dialogue. I have asked my epistemology class to consider your points carefully.

I agree with you completely that a person is not being reasonable if he bases his reasoning on beliefs that are contrary to existing evidence. That is how I would capture what you said. (I have to refrain from saying "beliefs without evidence" for reasons I won't get into, though I'm sure you are familiar with them.) If knowledge is true belief, it simply does not follow from the fact that you know that P that you rationally believe that P.

Your view (and Dan's, I think) seems to be that it is sort of axiomatic that if you know P you are entitled to reason with P. And, of course, I realize that there are many more people who agree with you than with me. However, I tend to think this idea is a relic of internalism. Consider an externalist view that knowledge is reliably produced true belief, which is a view that many sober-minded epistemologists hold. Well, the fact that your belief has been reliably produced is just as cognitively inaccessible as its truth. So this view also fails to support the claim that to know that P is to be warranted in using it as a reason for anything.

Regarding your question about using knowledge to get out of the forest, I think the answer is just very simple. If I have a true belief about how to get out of the forest, I’ll be able to use it to get out of the forest. It seems to me that you are asking how I will ever be able to convince other rational agents to follow me. (Clearly it would have to be rational agents, since, as you know, people who just show overwhelming confidence about how to get out of the forest with no reasons of any kind, Donald Trump for example, acquire devoted followers quite easily.) But if we do in fact act on my belief, we will get home, and it is the fact that the belief was true that will explain that.

I definitely do not claim that knowledge that is not rationally based is as good as knowledge that is rationally based. What I am saying is that being rationally based isn’t what makes it knowledge. Similarly, I would not say that an unkind human is just as good as a kind human, but being kind isn’t what makes us human. By the same token, I agree that true beliefs that are lucky aren’t as good as true beliefs that aren’t lucky. But I would say the same thing about, say, useful beliefs. That doesn’t tempt me to analyze knowledge as useful true belief.

In sum, knowledge is true belief. How we justify our claims to know is an independent and very important question in epistemology. What sorts of processes reliably produce knowledge is yet another independent and equally important question in epistemology. I think it is far clearer to separate these questions rather than to fold them into the nature of knowledge itself.

    Replies
    1. Ok, I think I just misunderstood you on value earlier.

      It wouldn't be surprising if some of my intuitions turned out to be relics of internalism, since I'm some sort of internalist. At least, I think that facts about justification (which I take to also be facts about knowledge) supervene on a subject's mental states, which are in some sense internal. Reasons seem to me to be mental states, and which reasons you have determines what you are rationally entitled to believe. This doesn't seem to require anything about cognitive access, though. I have reasons to believe that there are some dolphins on Earth, but I wouldn't say that I have cognitive access to that fact. I don't even have perfect cognitive access to the reasons (e.g. I could have a reason and not be aware that I have it). Reliabilist accounts of knowledge seem mistaken to me (i) because of the generality problem and (ii) because of cases like Truetemp--Truetemp is always right about the temperature, but he has no reason to think so, and so he does not know. I expect you have a different view of that case, though, or at least would prefer to use different language to describe it.

Part of the dispute about the usefulness of knowledge might rely on different senses of 'useful'. Suppose I have some problem with my computer, and there is a program P that, if I run it, will solve that problem. However, I have never heard of P and have no reason to think that it will solve my problem. Is P useful? In one sense, obviously yes. Were I to run it, it would work perfectly to achieve my goal. In another sense, though, P is of no use to me, because I have no reason to run it. I can only use it to solve my problem if I run it despite having no reason at all to think it will work. The forest case seems roughly analogous to me. I have a true belief about how to get out, and if I act on it, I will be successful. So, it is useful in the first sense. But since I have no reason to think that the belief is true, I can act on it only by acting irrationally--by heading West, say, with no reason at all to think that that is the right direction. This is the sense, I think, in which your irrational knowledge is not useful to me. Since I try to act rationally, I won't be able to avail myself of my knowledge in navigating the world. Of course, rational false beliefs also have a problem in that, although I will be in a rational position to use them, they won't work (since they're not true). This is one reason that folks might want beliefs that are both true and rational, since you can use them both rationally and successfully.

      I agree that truth, justification, and reliability are all distinct, but I'm not sure why adopting a particular definition of 'knowledge' separates these questions any further. It seems to me that we can still talk about all of the same states either way. On the traditional definition, we have true belief, justified belief, justified true belief, and knowledge. On your definition, we have knowledge, justified belief, justified knowledge, and justified, un-Gettiered knowledge. Is one of these ways of talking obviously clearer than the other?

    2. Brandon, just in reply to your last question, I say yes. I think the Gettier problem and the value problem are just not real problems; they are problems that we have created by insisting on folding the justification and/or reliability conditions into the definition of knowledge. There is simply no good reason for it. We can do all of the inquiry we need to do by adopting the view that knowledge is simply true belief.
I am inclined to believe, though I have certainly not argued for it here, that philosophers have created a concept of “genuine knowledge” that is incoherent in the way that some of our other favored notions are, such as genuine free will or genuine goodness or genuine concepts. As Daniel Dennett remarks about magic, it turns out that the genuine kind is the kind that is impossible. In the case of knowledge we have decided that real knowledge has to perform a magic trick; i.e., it has to be self-justifying.

  7. Brandon, I neglected to respond to your point about inductive skepticism. I don't feel committed to the view that all forms of skepticism concern the truth of our beliefs, but it seems to me that in general the reason we raise questions about the justification of our beliefs is because we are concerned about the truth of those beliefs. This seems true of inductive skepticism to me. You're right that inductive skeptics don't deny that the next raven will be black, but they have to be questioning whether it is true that they will always be black, or their concerns about our inductive practices make no sense. Of course, most people who work on the problem of induction aren't skeptics, just like most people who work on the problem of universal skepticism aren't.

    Replies
    1. It may be that we are ultimately concerned with truth whenever we raise questions about justification. E.g. it may be that the reason it is bad for us to have unjustified beliefs is that they're more likely to turn out false or something like that. I was just thinking that the truth of our beliefs is not what the skeptical arguments directly challenge. The inductive skeptic's argument seems no worse once we discover that the next raven is in fact black, since the challenge is that we were not justified by induction prior to observing the raven. As you say, I don't think the point of thinking about inductive skepticism (or any other kind) is that we are or should be skeptics, but rather to figure out why induction (or deduction, perception, etc.) can justify our beliefs even though we (reasonably!) know that they might be false.

    2. Brandon, yes I agree with you that the truth is not what is directly challenged by the inductive skeptic's argument. But it has to be the ultimate concern, since epistemic rationality is all about truth-seeking. At any rate, I think it is still clarifying in this case to be able to think of justification as a distinct epistemological issue.

8. Interesting conversation, in both the original post and the comments! I do have a question: how would you respond to the intuitive sentence "You don't know it, you just believe it"? Of course that sentence works if the speaker thinks his audience is in fact wrong (so they believe P, but in fact not-P). On the other hand, I think one can say that sentence without knowing whether or not P; in that case, it's because the speaker doesn't think the audience has a good reason to believe P.

    Put another way, there are norms governing particular epistemic practices. Do you think that those norms are only warranted if they pick out practices which reliably generate true beliefs? Or might there be some other grounds for those norms as well, such as coherence or reasonableness? (contra Zagzebski and others)

  9. Hi Ian, thanks for the question.

    I'm probably missing something, but those strike me as different questions, so for now I'll answer them separately.

I'm not sure how common it is for people to say "You don't know P, you just believe P," when they are not skeptical of the truth of P. Philosophers find it very easy to make sense of it, of course. But, in the end, I'm happy to allow that that use exists in non-philosophical English, along with many others. My suggestion is that the "true belief" conception is the most apt for the specific purposes of doing epistemology.

Regarding your second question, I think it depends on how we are thinking of epistemic practices. A lot of us just define epistemology narrowly, so that reliability for truth is the fundamental aim. My own inclination is to say that we evaluate theories primarily with respect to their explanatory power. Predictive accuracy figures centrally here, but so do other things like resolution, frugality, significance and, as you mention, coherence.
