Misinformation seems bad, and there seems to be an awful lot of it. Because of that, we might be sympathetic to some of the following claims:
Critical thinking classes should teach students how to identify misinformation.
People who spread misinformation, even unintentionally, are blameworthy.
There ought to be financial, legal, or social consequences for knowingly spreading misinformation.
And if we want to teach people how to identify misinformation, morally evaluate those who spread it, or prescribe consequences for spreading it, it would be useful to have a clear idea of what misinformation is. A reasonable starting point would be that misinformation is just inaccurate information, and so:
(M1) A piece of information I is misinformation just in case I is false.
This is appealingly simple, and it gets many paradigmatic cases of misinformation right. A libelous headline reading ‘Brandon Carey Steals Cats!’ would be false, and it would also be misinformation. But (M1) faces two serious problems. First, many pieces of information, including paradigmatic cases of misinformation, are just not the right sort of thing to be false. A deepfake video is misinformation, but videos don’t have truth values—a video can be manipulated or inauthentic, but it can’t be false. So, since (M1) requires that misinformation be false, (M1) is too narrow. Second, many pieces of misinformation are true. For example, suppose that the ministry of propaganda distributes flyers proclaiming that our glorious leader is undefeated in professional mixed martial arts. If our glorious leader has never competed in MMA, then the information on those flyers is true, but it is still misinformation.
We can avoid both problems by instead requiring that misinformation be misleading in the sense that, whether or not the information itself is true, it will tend to produce false beliefs in those who consume it:
(M2) A piece of information I is misinformation just in case I is misleading.
This is an improvement. Deepfake videos can’t be false, but they are typically misleading in the sense that they will tend to lead people to falsely believe that the subject depicted in the video did whatever they are depicted doing. So, (M2) can account for pieces of visual misinformation that do not have any truth value. Similarly, while the ministry’s true-but-misleading propaganda claim poses a problem for (M1), (M2) correctly counts this as a piece of misinformation, since it will tend to produce false beliefs about our glorious leader’s combat prowess.
But (M2) also faces a problem: some clear cases of misinformation will nevertheless tend to produce true beliefs. Suppose, for example, that I use a bunch of Twitter bots to spread rumors that a public figure whom I personally dislike has committed tax fraud, even though I have no evidence at all to suggest that they have. Based on the tweets from my army of bots, several people then come to believe that this person has committed tax fraud. These tweets seem like a paradigm case of misinformation that we should teach people to be wary of, blame people for spreading, etc., and yet it is consistent with the story so far that these tweets are not misleading. If it turns out, unbeknownst to me or anyone else, that this public figure has in fact committed tax fraud, then the rumors I’ve spread are not misleading after all—the beliefs based on this information are true!
Similarly, a fake news site motivated by ad revenue may craft thousands of headlines to maximize clicks and engagement, without any regard for whether the headlines are true. Nevertheless, some of those many headlines will coincidentally turn out to be true, and people who come to believe the content of those headlines will thereby form true beliefs. But none of this prevents these headlines from being misinformation, and so being misleading cannot be a necessary condition for being misinformation.
So, misinformation need not be false, and it need not be misleading. What, then, do cases of misinformation have in common? I propose that the key characteristic of misinformation is that it is epistemically defective in the following sense:
(M3) A piece of information I is misinformation just in case a belief based on I cannot be knowledge.
(M3) has several virtues. First, it still gives the right results in the cases that (M1) gets right. If you believe the content of a false headline, that belief will not be knowledge, because knowledge requires truth. Furthermore, (M3) has all of the virtues of (M2) over (M1), since beliefs based on misleading information will also be false and so not knowledge.
But (M3) also avoids the problem that accidentally true misinformation poses for (M2), since a belief that is accidentally true cannot be knowledge. On the assumption that a tweet from a new account with no followers has an evidential value of approximately 0, people who truly believe that someone committed tax fraud on the basis of my Twitter bots’ tweets will not have knowledge, because their beliefs are not justified. And even if a fake news site copies the format, branding, and other conventions of a known reliable source so effectively that its readers are in fact justified in believing the contents of those headlines that turn out to be true, those readers will still not have knowledge. They will have justified, true beliefs that they are nevertheless lucky to be right about in a way that is roughly analogous to traditional Gettier cases.
(M3) is still probably not quite right, though. If you believe that I steal cats based on a deepfake video of me doing so, that belief won’t be knowledge because it’s false. But if you instead believe that there is a video that appears to show me stealing cats, that belief is true and plausibly qualifies as knowledge. To refine (M3), I would need to find an appropriate way of distinguishing between the ways in which these two beliefs are based on misinformation, but I don’t know how to do that.
Brandon Carey
Philosophy Department
Sacramento State