Monday, November 30, 2015

Social transparency and the epistemology of tolerance

Last week I learned a new word, apotropaic, and darned if I haven't heard it three times since then!

Everyone is familiar with this sort of thing and has at least briefly experienced it as uncanny. It is called the Baader-Meinhof Phenomenon. Generalized, the BMP is our inclination to mistake an increased sensitivity to P for an increase in the number or frequency of P itself.
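
A toy simulation makes the distinction vivid (a hypothetical sketch with made-up numbers and invented names like noticed_count, not anything drawn from the psychology literature): the true frequency of P is held fixed, only our sensitivity to it rises, and yet the number of noticed instances jumps.

    # Toy illustration of the BMP: the true frequency of P never changes,
    # only our sensitivity to it does.
    import random

    random.seed(0)

    TRUE_RATE = 0.02   # chance that P occurs on any given day (held constant)
    DAYS = 365

    # One fixed year of events: P either occurs on a given day or it doesn't.
    occurrences = [random.random() < TRUE_RATE for _ in range(DAYS)]

    def noticed_count(sensitivity):
        """Count how many of the SAME occurrences we happen to notice."""
        return sum(1 for occurred in occurrences
                   if occurred and random.random() < sensitivity)

    # Before learning a word like "apotropaic" we notice, say, 10% of its
    # occurrences; afterwards we notice 90%. The events are identical,
    # but our tally is not.
    print("true occurrences:", sum(occurrences))
    print("noticed at 10% sensitivity:", noticed_count(0.10))
    print("noticed at 90% sensitivity:", noticed_count(0.90))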

Lately I've been thinking about the BMP in relation to social transparency. The free flow of social information is a defining characteristic of the current era, and I tend to be far more sanguine about its effects than most. But I have started to think that the BMP presents a serious challenge to my optimism.

Most of my peers tend to be very possessive about their personal information. They feel like they own their beliefs, ideas, tastes, interests and habits. Consequently, they regard those who acquire knowledge of such without their permission as thieves. They are also haunted by Orwellian metaphors, and tend to react to increasing levels of social transparency in the public sphere with alarm as well. The idea of cameras at every street corner, shop window and traffic intersection feels dirty to them, despite its obvious value for public safety.

I dislike snoops as much as they do, but I distinguish between my preferences and my rights. I see unrestricted access to information as a cornerstone of liberal democracy. For me, the most fundamental human right is the right to learn. Whenever we choose to prevent or punish learning of any kind, there has to be an excellent reason for it. For some kinds of highly sensitive information these reasons exist, but they are consequentialist by nature and do not spring from any fundamental right to control information about ourselves.

I like glass houses. I think a world in which it is nearly impossible to hide the fact that you are an abusive husband or a pederast cleric is clearly preferable to one in which what goes on behind closed doors is nobody else’s business. In a liberal society, there is no greater disincentive to such transgressions than the certainty of others finding out. My friends are all yesbut. As in yes, but this is exactly what concerns them. They follow Orwell in thinking that a socially transparent society is fundamentally an informant society, conformist by nature.

But the evidence is that they are just wrong about this. We are living in a time of unprecedented tolerance for diversity and self-regarding eccentricities. This has not been achieved in spite of increasing social transparency. As long as homosexuals, transgenders, apostates, recreational drug users and the mentally disabled were confined to the darkness of the closet we could ridicule them with impunity. But it is difficult to continue in this vein when the clear light of day reveals that many of them are people we love.

Now here is my concern.

If increasing social transparency is not managed very carefully, it could backfire spectacularly, thanks to the BMP. When social transparency increases quickly, we suddenly become aware of the many intolerable things that have been happening right under our noses. Consequently, we get the impression that the world is going to hell in a handbasket and we become receptive to irrationally harsh responses.

What do I mean by careful management? Two things, at least.

First, it means creating future generations of adults who are more epistemologically sophisticated than mine. We grew up thinking that being responsible and informed citizens meant paying careful attention to reliable news sources, caring about the less fortunate and following our conscience. But that is a serious error.

The news is almost entirely about relating recent interesting events; it rarely provides a statistical context in virtue of which the general significance of these events may be responsibly evaluated. This is why it is possible to be an informed and conscientious citizen by the standards of my generation and still be completely unaware of essential global facts, such as that we are living in a period of unprecedented world peace or that the global poverty rate has been cut in half during the last 20 years.

If we aren’t aware of the role the BMP plays in our reaction to constant reports of police brutality against minorities in the U.S., gang rapes of girls in India, the persecution of homosexuals in Russia, the public whipping of atheists in the Third World, and terrorism everywhere, then our reactions are likely to be intemperate and counterproductive.

Second, we are going to need to find the moral strength to punish wrongdoing less severely. What? Yes. To see why, consider that whenever someone decides whether to do wrong she makes an implicit expected value calculation in which the probability of being caught figures centrally. For this reason, the severity of the current punishment is itself a function of the probability of detection. In an increasingly transparent society, the probability of detection rises. Hence the previous levels of punishment are now intemperate and must be recalibrated.
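
To make the arithmetic behind this explicit, here is a hypothetical back-of-the-envelope sketch (the numbers and the function name recalibrated_severity are invented for illustration): if the would-be wrongdoer weighs something like the probability of detection times the severity of the sanction, then holding that expected sanction constant as detection improves means the sanction itself should shrink in proportion.

    # Hypothetical calibration: hold the expected sanction
    # (probability of detection * severity) constant as detection improves.
    def recalibrated_severity(old_severity, old_detection_prob, new_detection_prob):
        expected_sanction = old_detection_prob * old_severity
        return expected_sanction / new_detection_prob

    # If a violation used to be detected 10% of the time and drew a $500 fine,
    # then with detection at 90% the same expected deterrent is delivered
    # by a fine of roughly $56.
    print(recalibrated_severity(500, 0.10, 0.90))  # ~55.6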

As an example, consider new surveillance capabilities which can detect every single traffic light violation. Many people oppose the proliferation of this kind of technology, despite its obvious ability to save lives. Why? I think it is partly because they foresee an intolerable rise in the cost of innocent mistakes. In this sense, Orwellian concerns are absolutely on point. If we are unwilling to attenuate the severity of our punishments, applying the technology of transparency to crime detection is the road to the police state.

Social transparency has so far been part of the recipe for a more tolerant society, but so far it is tolerance for things that we are learning to hate less. Adopting more temperate responses to crimes we perhaps hate even more than before is a whole nother thing.

I hope future generations will be enlightened enough to do it, but in the meantime some apotropaic magic would come in real handy.


G. Randolph Mayes
Department of Philosophy
Sacramento State

15 comments:

  1. Randy, great post! The more I hear of your pro-transparency views, the more I find myself either taken in or realizing they resonate with me to begin with.

    However, I've started and deleted this comment three times already…and I'm somewhat happy that nobody knows what was in the earlier attempts but me. (And whoever has the keys to discovering all the past keystrokes and screen events on my laptop.) Perhaps one of the nice things about a robust public-private divide is the space to rough draft things, and not always feel like you have to be in final draft mode 24/7.

    There are times, of course, where your rough draft needs to be pretty darn good already--like when approaching a traffic intersection where many other lives are on the line. You can't just plead "relax, man, I was rough-drafting my driving habits when I ran that red…"

    But there are times when people need space. And space rhymes with grace. So they must be close cousins, right?

    More seriously, and relevant to your musings about tolerance: I think we could all stand to take a deep breath or two when considering the context in which a socially disapproved belief or behavior is discovered.

    Plug in your own favorite issues here, but there's a big difference between discovering that a presidential candidate said, and keeps saying, X, or does, and keeps doing, Y; versus a private citizen who once said X when a camera was thrust in her face unprovoked, or did Y when someone else was nearby with an iPhone to capture it.

  2. Russell, thanks for the thoughts, and yes, I agree with what you say.

    As you may remember from previous conversations, I do defend a right to privacy, but not as a right to control personal information. For me, personal space is defined by the fundamental activity of persons, which is to think and reason. So for me the right to a personal space is the right to exercise practical reason without interference. It's a way of thinking about privacy that cuts across the private/public divide as traditionally conceived. For example, people have a right to this kind of privacy even when they are in public. If you were sitting on a park bench composing your thoughts, for example, people would be violating this sense of your right to privacy if you noticed them reading over your shoulder. Not because they were finding things out about you, but because they were interfering with your ability to think and reason.

    Regarding your final point, I agree completely that this is the kind of maturity that is required of citizens of an increasingly transparent society. As I suggested, some of this comes for free. Partly it is because in a transparent society it is hard for any one person to keep people's attention for more than a few seconds. But it is also that so many things that used to be remarkable are now seen as pedestrian. It's definitely a place for grown-ups.

  3. Hi Randy,

    I have a question about your assertion that punishments ought to decrease as social transparency increases. I'm not sure which theory of punishment you're basing that on, and wonder if you'd enlighten me?

    The reason I ask is this: if the probability of detection, and hence of being caught, rises as society becomes more transparent, wouldn't the moral desert of the wrongdoer increase accordingly? In effect, I'm saying that if a wrongdoer, knowing she is more likely to get caught, still does the wrong, ought that not entail a harsher punishment, given the greater moral desert needed to overcome the greater likelihood of detection?

    When I think of justice in retributive terms, I'm not clear yet that increased transparency should translate into less severe punishment.

    (I really enjoyed this post!)

  4. Hi Matt, thanks for the question.

    You correctly detect that my stance on punishment is broadly consequentialist, but I think the claim that our punishments should become more temperate as social transparency increases applies even to retributivism because our basic sense of what punishment is deserved is still liable to be strongly influenced by the probability of detection. (This is an empirical claim, and it might be an interesting piece of experimental philosophy to detect just how strong the effect is.)

    I don't have much use for retributivism, but it's interesting to note that historically it was very much an exercise in temperance. The Code of Hammurabi, e.g., which strikes us as barbaric today, was instituted partially to moderate excessive bloodlust. In a state of nature our sense of what people deserve for their transgressions is generally far too severe for the rule of law.

    You seem to suggest two things in your second paragraph: that as the probability of detecting crime X rises, (1) people are more deserving of punishment for crime X; and (2) people are deserving of increasingly harsh punishments for X.

    Regarding (1) I'm not sure I see how the probability of punishment relates to levels of desert. I can imagine saying something like "You knew you were sure to be punished if you did X, so it serves you right." But I think what I really mean by that is that you are an idiot to be surprised or resentful, not that you really deserve it more.

    Regarding (2), it seems like your point is actually more consequentialist than retributivist. It doesn't seem to me that someone deserves a harsher punishment if he is willing to commit X even when the current punishment is virtually certain. (In fact, it’s easy to think of ways in which he may deserve it less, as when he is stealing food to feed his family, despite knowing that he will be whipped for it.) However, it might be said that a harsher punishment is warranted on the grounds that it is not a sufficient deterrent, despite being virtually certain.

    Of course, if a necessary condition of adequate punishment is that, were it certain, it would be sufficient to deter anyone, that would simplify the penal code greatly: death for all crimes. My operating assumption here (which may be wrong in many cases) is that a particular pre-transparency punishment is doing an adequate job for us in terms of hitting the sweet spot with respect to all of the different desiderata of legal punishment. Post-transparency we are going to get a big spike in detection, which will usually mean both an actual drop in the base rate of crime and an increase in the rate of detection and punishment.

    I wouldn't necessarily advocate dialing back the severity of the punishment until we have re-established the previous rate of crime. But we should at least dial it back to the minimum level required to preserve the benefit gained by transparency. Suppose the current fine for littering is 300 dollars, and transparency causes littering to drop by 90%. Then the fine should at least be reduced to the point that further reductions would cause a subsequent increase in littering.
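
    To picture the "dial it back to the minimum" idea, here is a purely hypothetical sketch. The deterrence curve, the constants, and the reading of "preserve the benefit" as keeping littering at or below a fixed fraction of its pre-transparency level are all invented for illustration.

      # Hypothetical sketch: find the smallest fine that still preserves
      # the benefit of transparency, under an invented deterrence curve.
      import math

      def littering_rate(fine, detection_prob, k=0.05):
          """Toy model: littering falls off exponentially with the expected fine."""
          return math.exp(-k * fine * detection_prob)

      OLD_FINE, OLD_DETECTION, NEW_DETECTION = 300, 0.10, 0.95

      # "Preserve the benefit": keep littering at or below 10% of its old level.
      target = 0.10 * littering_rate(OLD_FINE, OLD_DETECTION)

      minimal_fine = next(f for f in range(0, OLD_FINE + 1)
                          if littering_rate(f, NEW_DETECTION) <= target)
      print(minimal_fine)  # 81 in this toy model, far below the old $300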

    Replies
    1. I can definitely see that sweet spot that you're referring to in your last paragraph. That is much clearer to me.

      I think you nailed what I was going after when you said "it might be said that a harsher punishment is warranted on the ground that it is not a sufficient deterrent, despite being virtually certain." That's how I was thinking about increased transparency. As far as particular cases wherein a person may deserve less punishment, I think that mitigation would still exist within this legal structure, and so would offer protections for actions taken with less than malicious intent.

      Thanks for the response!

  5. Here are some scattered thoughts:

    1. I don’t think I’m pro-transparency, but I am skeptical about a kind of propertarian model that provides for overly strong control rights with respect to one’s personal information. I am skeptical for the same reasons that I’m skeptical about overly strong control rights with respect to ideas, which are enforced in intellectual property rules. Understanding either ideas or information as personal property almost seems like a category mistake.

    2. I wonder if there is more evidence for the connection between social transparency and social tolerance. What you say makes sense, but I have more typically understood increasing social tolerance in terms of a changing normative stance, one that lends itself to the kind of privacy rights you defend and that is grounded in a general respect for the deliberative capacities of others (even if theirs says to zig where yours says to zag).

    3. I liked your price theory analysis of punishment, but I think your conclusion that our current punishments would be intemperate if we had more social transparency is right only if the current level of violations is optimal. It could be that it isn’t, but ratcheting up punishment in the current situation would make it disproportionate to the offense. But maybe your understanding of optimality already included social attitudes about proportionality.

    4. In any case, people like Mill were more worried about informal social enforcement than formal criminal enforcement. Civil society “practises a social tyranny more formidable than many kinds of political oppression, since, though not usually upheld by such extreme penalties, it leaves fewer means of escape, penetrating much more deeply into the details of life, and enslaving the soul itself.” This seems like the bigger worry associated with greater levels of social transparency, and so the issue in 2 above becomes much more pressing.

    Replies
    1. Kyle thanks for these interesting thoughts.

      1. I guess I don't think intellectual property is quite a category mistake, but it sounds like you may not be entirely convinced of that either. To me a category mistake is sometimes like a mistake in the lab that accidentally creates something of value. We may not even know what it is at first, but we appropriate some words from the existing vocabulary and voila, that’s what it is. Other than that, I like the idea that the extent to which we own personal information is, like the extent to which we own our own ideas, greatly exaggerated and very much subject to abuse.

      2. That's a very reasonable request. My evidence is entirely correlational. You’re right, it makes sense, meaning it’s easy to explain why social transparency would tend to make us more tolerant, given that it does. But if intolerance these days were increasing, I suppose we would be able to explain that by reference to increasing transparency, too. So for now I’m just stuck saying that I think it’s a fairly observable phenomenon at a small scale that the more open people are with each other, the more trusting and accepting of each other they tend to be.

      3. Good point. I do think, as I’m sure you do, that the only thing that prevents the optimal level of violation from being zero are the costs of enforcement, and one of those costs is the harm resulting from a willingness to impose excessively severe punishment. But I still think you are right. For me, the main point is that we should not punish more than is required to achieve the optimal level, and often this will be less than what is currently required when transparency conditions are instantiated. (What I would actually like to see us do is move over to the Finnish system, where personal wealth is transparent and fines for crimes like traffic violations are indexed to the ability to pay.)

      4. I’m not sure how to assess the accuracy of Mill’s claim for the type of society his views helped to bring into being, which enjoys a level of individual liberty of action and expression that might have made even him a bit uncomfortable.

    2. Re: 1, I just meant that (what should be) the primary motivation for having property rights is to have a way to avoid and settle conflicts about who gets to use something in a world of scarcity. Two people who both want to use, say, the same car will inevitably come into conflict, and so we need to establish control rights about who gets to exclude whom. But two (or more!) people who both want to use the same idea or information don't.

    3. It's kind of turning off the road into the cornfield, but I'm not sure I understand why we shouldn't be able to grant property rights for other reasons. Settling conflict due to scarcity is a great reason. But we might want to grant property rights even when there is no scarcity, just because people get attached to things. Obviously the case for owning ideas, for at least a brief period of time, is the theory that it can stimulate creativity and help to alleviate scarcity. This is probably more a conversation to pursue at Flaming Grill.

  6. Randy,

    I've been thinking more about your post since yesterday, and I'd like to share a criticism, or at least a concern, in a rudimentary form.

    What if increasing social transparency does lead to increasing tolerance, but in ways that are really not good? And relatedly, how might we detect when that is happening?

    To illustrate: your mentions (in paragraphs 6 and 7) of people/types that social transparency will affect include the following:

    1. abusive husband
    2. pederast cleric
    3. homosexuals
    4. transgenders
    5. apostates
    6. recreational drug users
    7. the mentally disabled

    Now, then: you treat 1 and 2 as groups you want social transparency to smoke out so we can more effectively deter/punish them (though possibly with punishment less severe than now). But you treat 3 through 7 as groups you see social transparency as liberating from previous social (and in some cases political) exclusion.

    My concern is that the same mechanism of social transparency is at work in both parts of the list…so why are we confident that the same dynamic that works with 3-7 won't eventually work with 1-2? Or, conversely, that the same dynamic that works with 1-2 will work with 3-7?

    One answer: look and see. Another answer: the distinction between self-regarding and other-regarding eccentricities.

    Both answers have a point. But both answers also leave something out.

    The BMP you discuss is not just something that the righteous use to evaluate how much wickedness is out there in the world. It's also something the wicked use to see how much wickedness is out there that agrees with them. "Hey, maybe I'm not going to hell in a hand basket by stoking the flames of my sexual desire for young children...after all, look how many child sex offenders live right in my neighborhood; and look how many people there are in this online community who like that sort of thing; surely they are not all undercover FBI agents…"

    Am I making sense here?

  7. Perhaps I would make more sense in my most recent post (at 11:42 AM) if I had proofread it a bit more.

    The point of that post's final paragraph was that the BMP (mistaking an increased sensitivity to P for an increased frequency of P) can do its work whether it's the good guy hearing about the bad guy, or whether it's the bad guy hearing about the bad guy.

    In an age of social transparency, BMP can make some people fear the ISIS apostate-beheader more than they need to, but it can also make other people want to imitate the ISIS apostate-beheader more than they ought to.

    I remember thinking something roughly like this back when I was an undergraduate. I was watching a video in a non-philosophy class, which was extolling the virtues of social transparency made possible by new technologies. A person interviewed on the video actually testified roughly as follows: "I used to think it was weird, and wrong, for me to want to [X], but when the internet showed me there were a lot more people out there wanting to [X], I realized it was OK." I remember thinking at the time, "Seriously? That's not just invalid, but a recipe for trouble." And I said so in class. Which probably convinced no one. (I don't recall whether X was even identified in the video.)

    One more twist here: the social transparency of today's world can lead to a pernicious casualness, or even callousness, towards things we ought to be neither casual nor callous about. We treat stuff as "ho hum" or "par for the course" or "no big deal" because we have seen how common it is. Such indifference may not be as lethal as the inspiration some people take from truly bad stuff (like the person who actively welcomes wickedness into his life because he sees others doing it, and mistakenly thinks that more people are doing it than really are, and thinks he should get over his scruples and give in to his temptations). But such indifference is still a hazard.

    At the end of the day, I am still a huge advocate of social transparency. But I want to be an advocate with open eyes.

  8. Russell, thanks for the further thoughts, and for your extended careful consideration of mine.

    I just think you are clearly right about this. Social transparency clearly facilitates the creation of all sorts of communities, and some of these will be communities of people who are engaged in practices that we really want to discourage, such as pedophilia or illegal commerce on the Darknet. You have to be right that when these people find each other, especially the marginal ones, their commitment to the interests that bring them together is more likely to be reinforced than diminished as they are welcomed into the fold.

    So the question that arises is what the net effect of this is, given that social transparency also tends to make these people more visible to those of us who disapprove of their actions. I certainly don’t know the answer, but it is important to be aware that both dynamics are working: one that would create an increase in the frequency of P, and one that would create a greater sensitivity to P. It would be facile, of course, to say that they simply cancel each other out, but that is one possibility.

    You also hint at a variation of this problem which may actually be of greater concern to you, and that is the tendency of social media to represent as normal things that really aren’t. For example, people may now have an exaggerated sense of the percentage of people in the population who identify as homosexual. Perhaps a teen who is struggling with his or her sexual identity will be encouraged by this perception and end up identifying as gay rather than straight, thereby increasing the incidence of homosexuality in the general population on the basis of a falsehood. Anyone who thinks this is a bad thing would be understandably concerned about it. (It should be noted, of course, that in the past, the opposite problem prevailed. People struggled against their strong homosexual tendencies based on the misperception that it is an extremely rare mental disorder.)

    Another example (not to make it all about sex) would be the consumption of pornography. I’m pretty convinced that social transparency has generally contributed to the perception that the consumption of pornography by males especially, as well as its concomitant masturbatory behavior, is almost universal. And I’d be inclined to believe that this helps to explain why many TV shows that families watch together these days (e.g., Game of Thrones) often contain what is essentially soft porn.

    Overall I guess I would want to say that this effect is probably constrained to behaviors for which there is reasonable disagreement about their desirability. And probably you get the opposite effect (as I argue in the post) for behaviors that we generally agree to be unacceptable. At any rate, this speaks to the importance of creating future generations of people who at least have the sense to go to Wikipedia and check their subjective impressions regarding the probability and incidence of P against the actual measured frequency of P.

  9. Randy,

    An excellent, thought-provoking post. Also, a really interesting exchange with Russell. I'm inclined to agree with his point that greater transparency may encourage undesirable behavior as well as helping us to become more tolerant. But I think we might perhaps distinguish between transparency and publicity. Publicity is something that people choose and use, and it is this, perhaps, that is more likely to be involved in encouraging undesirable behavior (e.g. terrorism).

    One other thought. You hope for the creation of "future generations of adults who are more epistemologically sophisticated than mine. We grew up thinking that being responsible and informed citizens meant paying careful attention to reliable news sources, caring about the less fortunate and following our conscience. But that is a serious error."
    I don't go along with this. I understand and accept your basic point that due to the BMP effect and other factors, we often embrace dubious conclusions, letting the heart rule the head, not correcting epistemological distortions, etc. But I think quite a lot of news sources are becoming more responsible in the way you'd like to see. I don't have hard data to back this up, so I may be unwittingly illustrating just the kind of thing you warn against. But it seems to me that the NYT, NPR, the BBC, the New Yorker, Harper's, FiveThirtyEight, 3QuarksDaily, etc. do more these days (compared to 20 years ago) to report the findings of science and social science, and to contextualize them. So a typical article on poverty or incarceration in the US today will not just spin anecdotes about some homeless or jobless folks; it will typically include plenty of statistical information. And it's often the hard data more than anything else that underlies the outrage of liberals with respect to poverty, incarceration rates, etc.

  10. Hi Emrys, great to hear from you. I'm getting a little tired of saying this in response to comments on this piece, but I agree with you. The news, at least the sources you cite, does seem to be gradually becoming more statistically informative. Though spending a few days watching local nightly news might change your tune a little.

    I guess one thing I would say in defense of my claim above is that the news really is event driven, and that's an inherent limitation. Even the sources you mention (with the possible exception of 538) linger over the lurid details of something like the Paris attacks, for the simple reason that people are interested in it as a story, and this can't help but exaggerate its significance. I'm not blaming the news organizations for doing this; they presumably understand the needs of their clientele. It's just not a way for an individual to develop an informed perspective.

    Hans Rosling has a nice TED video in which he polls a TED audience on basic statistically oriented multiple-choice questions such as "What is the average number of years that women age 30 have spent in school?" and he compares the answers of the TED audience to those of Swedes. The TED audience actually outperforms Swedes, but both are outperformed by monkeys. In fact Rosling explains the disparity by reference to the fact that monkeys don't read the news. So that is not so good, I think.

    Apropos of nothing you said, it's also not all that clear to me how much value results from being statistically informed. I know very intelligent people, for example, who are fully aware that drunk drivers are vastly more dangerous than terrorists. But they just don't experience much cognitive dissonance at being apoplectic about 10 children killed on a playground while being only mildly interested in the thousand or so that are killed every single year by drunk drivers. It's not clear to me that they would choose to allocate resources for dealing with these problems in a rational way.
