Tuesday, October 11, 2016

Extinction or unfair survival of a few?

There seems to be something especially bad about humankind going extinct. Human extinction appears significantly different from the extinction of any other species, so its badness is not only about the loss of an entire species. And it is qualitatively different from just having most people on Earth die, so its badness goes beyond the loss of a large number of human lives.

The rapid development of technologies that are as powerful as they are fragile (e.g. nuclear weapons, genetically modified organisms, superintelligent machines, powerful particle accelerators) has made some people (e.g. Nick Bostrom) worry (a lot) about human extinction. According to them, the biggest threat to human existence is not the possibility of a giant extraterrestrial object impacting the Earth, but the possibility of our human-made technology going wrong, either through intentional misuse or through our losing control over it (e.g., a too-intelligent but amoral machine taking control of humans; a self-replicating nanobot that eats the biosphere). Human extinction sits at the top of the so-called existential risks, which are receiving increasing attention. Centers and institutes have recently been founded to study existential risk and the threat that new technologies pose to humans (here, here & here). According to some extinction-worried philosophers, existential risk, and in particular human extinction, is the worst sort of risk we are exposed to, because it destroys the future. And we should worry about it. More importantly, according to them, preventing this risk should be a global priority.

I would like to share with you some thoughts about human extinction – thoughts that, I confess, are not motivated by worry but by philosophical curiosity. Let’s consider a comment by Derek Parfit (when reading it, you can fix his sexist language by substituting “humankind” for “mankind”):

“I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
1. Peace. 
2. A nuclear war that kills 99 per cent of the world’s existing population. 
3. A nuclear war that kills 100 per cent.
2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences?” (1984, 453).

Parfit states that while for many people the greater difference lies between scenarios 1 and 2, he believes the difference between 2 and 3 to be “very much greater”. He argues that scenario 3 is much worse than scenario 2, not only because more people would die, but because it destroys the potential of the millions of human lives that could be lived in the future. Assuming we give value to human life, that means losing a lot of value. And even more so if we attribute value to what humans do (the art they create, the technology they design, the ideas they generate, the relationships they build). Scenario 3 destroys and prevents a lot of value. Extinction-worried philosophers conclude that preventing scenario 3 should be humanity’s priority.
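
To see the arithmetic behind Parfit’s point, here is a rough, purely illustrative calculation; the figures are assumptions made for the sake of the example, not Parfit’s own:

\[
\begin{aligned}
\text{Loss}(1 \to 2) &\approx 0.99\,N \approx 6.9 \times 10^{9} \text{ lives}, && N \approx 7 \times 10^{9} \text{ (current population)}\\
\text{Loss}(2 \to 3) &\approx 0.01\,N + F \approx 10^{16} \text{ lives}, && F \approx 10^{16} \text{ (potential future lives)}
\end{aligned}
\]

On any such estimate, the difference between 2 and 3 dwarfs the difference between 1 and 2 whenever the number of potential future lives F is much larger than the current population N.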

Let’s now add a twist to Parfit’s scenarios. I take Parfit’s scenario 2 to assume that the 1% who survive are a random selection of the population: during the nuclear explosions some people might have happened to be underground doing speleology, or underwater, and survived as a lucky consequence. Let’s modify this element of randomness:

1. Peace 
2. Something (a nuclear war or any other thing) kills 99% of people, and the 1% that survives is not a random selection of the Earth’s population. The line between those who die and those who survive tracks social power: the survivors, thanks to their already privileged position in society, had privileged access to information about when and how the catastrophe was going to happen, and had the means to secure a protected space (e.g. an underground bunker, a safe shelter in space). 
3. Something kills 100% of humans on Earth.

These scenarios raise at least two big questions: Is 3 still much worse than 2? And should we prioritize preventing it?

Let’s focus on the second question. I hypothesize that (i) the probability of a scenario like 2 (i.e. a few people survive some massive catastrophic event) is at least as high as that of 3, and (ii) the probability of a non-random 2 is higher than that of a random 2. We can tentatively accept (i) given the lack of evidence to the contrary. In support of (ii) we just need to acknowledge the existence of pervasive social inequality. The evidence of unequal distribution of the negative effects of climate change (here and here) can give us an idea of how this would work.
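
Spelled out a bit more formally (the notation is mine, added for clarity):

\[
\text{(i)}\quad P(2) \ge P(3), \qquad \text{(ii)}\quad P(2_{\text{non-random}}) > P(2_{\text{random}}), \qquad P(2) = P(2_{\text{non-random}}) + P(2_{\text{random}}).
\]

Together, (i) and (ii) give \(P(2_{\text{non-random}}) > \tfrac{1}{2}P(2) \ge \tfrac{1}{2}P(3)\); and to the extent that pervasive inequality makes nearly every 2-type scenario non-random, \(P(2_{\text{non-random}})\) approaches \(P(2)\), which by (i) is at least as high as \(P(3)\).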

If this is right, then the survival of a group of humans selected along the lines of social power is at least as likely as human extinction.

Extinction is bad. Now, how bad is a non-random 2? And how much of a priority should its prevention be? Unless we agree with some problematic version of consequentialism, non-random 2 is pretty bad: it involves achieving good ends via morally wrong means. Even if it were the case that killing everyone over fifty years old would guarantee the well-being of everyone else, most would agree that killing these people is morally wrong. “Pumping value” into the outcome is not enough. Similarly, even if non-random 2 produces the happy outcome of the survival of the human species, the means of getting there are not right. We could even say that survival at such a price would cancel out the value of the outcome.

My suggestion is to add a side note to extinction-worried philosophers’ claim that avoiding human extinction should be a global priority: if the survival of a selected group of humans along unfair lines is as likely to happen as extinction, then avoiding the former should be as high a priority, and we should invest at least as many resources in remedying dangerous social inequalities as we do in preventing the disappearance of the human species. I personally worry more about the non-random survival than about extinction.

Saray Ayala-López
Department of Philosophy
Sacramento State

17 comments:

  1. This is interesting, Saray, but I think I'm confused by the second-to-last paragraph. Why do you say non-random 2 is a good end achieved by utilizing a means that's morally wrong? It seems to me that 2 (whether the survivors are a random selection of the population or not) is a very bad, rather than good, end.

    I think you just meant that non-random 2 is thought to be a good end relative to 3, in the sense that more people survive. I get that. You suggest, though, that we shouldn't endorse non-random 2 over 3, because doing so assumes a problematic version of consequentialism: non-random 2 is achieved via some morally wrong means.

    But now I don't see what the means utilized to achieve the 1%'s survival are supposed to be. I see what the morally wrong means are in the other case -- some actor kills everyone over 50 in order to secure the well-being of everyone else. Is something like that going on in non-random 2? It's true that 99% will die, but how are their deaths used (by whom?) as a means to the survival of the 1%?

  2. Oh, here's a separate question about your last paragraph. If non-random 2 is as likely to happen as 3, and if preventing the former is as high a priority as preventing the latter, it still might not follow that we should devote as many resources to remedying social inequalities as to preventing human extinction. Because it's possible (likely?) that the best way to prevent non-random 2 is to pursue policies that prevent 3.

    Replies
    1. Great post, and good questions from Kyle. Maybe a response to this particular question is that we should do ideal theory first and more real-life-apt theorizing second. It may make a difference to our real-life policies if we agree that certain versions of non-random 2 are worse than 3.

    2. Kyle, thank you for your questions. I see the problem in identifying "the means" in scenario non-random 2.
      A possible pathway to explore (which could eventually lead to an answer) might be that there is a zero sum in the game of social power, and similarly in the game of survival: for every person higher up in the social hierarchy there are some number of people lower down, and likewise, for every person who survives there are many who don't, and these two groups are not independent. So even though the deaths of the 99% are not "used" for the survival of the 1%, we might still say those two things (the survival and the deaths) are not independent of each other.
      Another thought is that "the means" that are supposed to be morally wrong in scenario non-random 2 are not the very deaths of the 99%, but the fact that an unfair hierarchy is the criterion for survival. In the example of killing people over 50, the killing is the bad means, and that's easy to see. In non-random 2 the bad means is not the deaths of the 99%, but the fact that a few could secure survival thanks to their privilege.
      About your second question, you might be right. I sympathize with Dan's suggestion (thank you, Dan!)

    3. Well, sure: given a specified population, one person's spot in the 1% who survive will come at the expense of someone else being there, so it's a zero-sum deal. But that doesn't mean that it's wrong to be there (after all, a full 1% has to be in the 1%), just that it matters how one winds up there.

      That's just to agree with Dan that certain versions of non-random 2 are worse than others. For example, it would be wrong if those in the 1% did unfair things to get there and take the places of those who would otherwise be there.

      But if inequalities aren't the result of anyone's conscious, nefarious design - if they're simply the result of countless ordinary, and perfectly permissible, decisions billions of dispersed people make about what to do - then it's much less obvious that anything needs to be rectified.

    4. I see how difficult it is to pin down the locus of wrongness when we talk about inequality. Even if inequalities are not the result of anyone's conscious design (some of them are, though), they are still wrong. Even if inequalities are the result of small decisions of dispersed people (e.g. buying your daily food and clothes from industries that oppress their workers; consuming what you don't need), they are wrong. A different, very interesting question is how to allocate responsibility, and what that responsibility would demand from us.

  3. Saray,

    Interesting scenarios; in addition to Kyle's questions, I'd like to raise the following issues:

    a. In support of (ii), you point to social inequality. But that's not all there is to it. For example, who's more likely to survive a global nuclear war: a rich person who works on Wall Street and lives in Manhattan, or a member of a tribe that lives in the Amazon rain forest?
    It seems to me the latter is far more likely to survive, because New York and all major American cities will be targeted and very likely destroyed, whereas probably no one will fire their nukes at the Amazon rain forest (at most, they might target cities in Brazil, Peru, etc., but the forest?)
    That would make non-random 2 more probable than random 2, but not along the lines of social power, or more precisely, not entirely along those lines. In some social contexts, social power would increase survival chances. But on the other hand, some of the poorest and least powerful people in the world would be much more likely to survive than some of the richest and most powerful.

    b. Assuming that your point about social inequality holds (but because of a., I don't know that it does), I'm not sure why that particular kind of non-random 2 involves achieving good ends by morally wrong means. Who would be acting wrongly? If a person is in a position to save herself and her family thanks to having better info, is it morally wrong on her part to try to survive?
    If not, I'm not sure who would be behaving immorally.

    c. The case of actively killing everyone over 50 may be morally relevantly different because of an intent to kill, rather than an intent to save - though I'm still not sure who would be the people acting immorally in non-random 2.

    Replies
    1. Angra, thank you for your comments.
      About your comment a: your concern seems fair for the example you mention (someone in a possibly targeted city who happens to be wealthy vs. someone in an in-principle non-targeted area). I'm still convinced that survival in non-random 2 would be along the lines of social power. Let's twist your example and take two people living in New York: a wealthy person working on Wall Street and someone living in one of the poor neighborhoods of Queens. Even if we accept that New York is more likely to be hit/affected in the event of a massive catastrophic event, the chances of survival won't likely be equally distributed across the area.
      So maybe your point is a specific one about geographic location in the event of a war: whether or not you are located in a target area will be critical for your survival. But still, within the target area, social power can be the criterion.

      About your comment b: I agree that maybe no one is acting wrongly by actively trying to save themselves and their friends. I was not assuming that we need survivors to act wrongly in order for non-random 2 to be a morally wrong scenario. Maybe every single survivor in non-random 2 even tries to do the morally right thing by trying to save those who were not as lucky as them. That is still compatible with non-random 2 being morally wrong, given that there is a small group of people whose social privilege (with or without morally bad intentions in their heads) secures them survival, while non-privileged people cannot achieve it.

    2. Saray,

      I guess social power might make a difference in places like New York, though I'm not sure of that: for example, if total war were to break out this year (it's not going to happen, but hypothetically), it's difficult for me to see how the person working on Wall Street would be safer than the person living or working in Queens. There are no shelters for the rich right now, underground or in space. I do think some people in positions of military or civilian leadership would be more likely to survive, because there are plans to keep a country running even in case of massive war, but I don't see why the Wall Street worker would be in a better situation than the poor person in Queens, in terms of survival chances.
      Also, I'm not sure how that could change in the near future, either. For example, if the US develops lasers that can shoot down all enemy nukes, that's going to protect Queens as much as any other part of New York. And not many people seem to be building anti-nuclear shelters that could save the powerful.

      That said, some people in America are in fact using their money to increase their survival chances: survivalists who move to rural areas, stockpile all sorts of stuff, learn to hunt and gather, etc. As I see it, they seriously overestimate the chances of a global catastrophe. If they made a better assessment, they wouldn't be survivalists. But as it happens, if there were a global war, overall they would be in a better position to survive than nearly everyone else in America. But I see that as their choice (a bad one, given the available info, but still their choice).

      In nearly all other countries, there are no nuclear shelters, either, and information about a war would probably be shared almost in real time through social media, so I'm not sure how wealth and power would help, either.

      At any rate, granting that locally, wealth and power would make a difference, globally it seems to me it probably wouldn't, for several reasons, like:

      1. People who live in rural areas would be much more likely to survive than people who live in urban areas. That increases chances of survival in some less developed countries.
      2. Population centers in the richest countries are much more likely to be targeted than population centers in poor countries with little to no strategic military capabilities. In particular, major population centers in the US and other rich countries would be destroyed, but even some major population centers in poor countries might survive.
      3. Hunter-gatherers are already used to living off the land; even if many of their usual prey animals die out, they're in a better position to adapt and hunt whatever is left and takes over than people who live in richer countries and have no idea how to hunt.

      Still, given that your concern is about local inequalities, maybe something could be done about that, though I'm not sure what. Do you have any specific policies in mind that might reduce inequalities in survival chances? Or do you think that general policies to reduce economic inequalities would also reduce chances of survival in case of nuclear war? (there is still the issue of survivalists; they and their families would be in a better position than those in a similar economic situation).

      Regarding your second point, I understand moral wrongness as a property of actions, not of situations, so we have a different view on that. Still, I could agree that a situation could be very bad, even if no one acts wrongly.

  4. Saray,

    Great jumping off point for discussion!
    First, I agree with Kyle’s point about the surviving population of non-random 2. If the survivors avoided extinction through unfair means (including knowing participation in some institutional harm), then I think that may be relevant to the value we assign to their surviving extinction. If the survivors avoided extinction through no fault of their own, then I don’t see a difference between random 2 and non-random 2. There may be a difference (they’re rich!), but not a moral difference.

    Second, even if the survivors survived through some fault of their own, I think any moral difference between non-random 2 and random 2 is much smaller than you suggest. Here’s why. What’s valuable about the surviving population of non-random 2 and random 2 is not only their own survival, but the survival of the species for future generations. Indeed, the value of the current population is a drop in the bucket compared to the value of future generations of humans. As you said, we are concerned about these lives and the art they create, the technology they design, the ideas they generate, and so forth, multiplied by the number of generations that humans continue to survive until the next extinction-level event wipes out the whole world.

    An extinction-level event, such as nuclear warfare (maybe not so much with GMOs), not only would destroy most of the earth’s population, it also would fundamentally change the lives of those who survive. The survivors are in a bunker, but their mansions and yachts are gone, their technology and information are gone, the banking system is gone, etc. If an extinction-level event has wiped out 99% of the population, whatever the survivors had in terms of resources is mostly gone (except for what they brought with them into the bunker, but without the security of existing political systems). The material resources that underwrote their unfair advantage are gone.

    It seems we need to take into account what an extinction level event would do not only to the human population, but also to our social, economic, and political systems. We also need to take into account what the lives of future generations would look like in this drastically different world. Future generations are the main reason for valuing Parfit’s 2 over 3.

    One more point. I think your main worry is with present social, economic, and political inequalities. It seems this worry is very different from the worry about the extinction of our species. Yours is a worry about the present; the extinction worry is mostly about future generations. Yours is a worry about present systems that perpetuate inequalities; the extinction worry is about human lives. Even if certain human lives were lived poorly (in morally deficient ways), I think the lives themselves are valuable. I guess I still would be more concerned about 3 than non-random 2.

  5. Saray, thanks for this interesting post. Here are a few thoughts.

    (1) Confession: I find this issue difficult to think clearly about, partly because I keep flipping back and forth between considerations of self-interest and morality. I relate to the perpetuation of the human species more as a matter of self-interest (though I realize there is something odd about attributing interests to the entire species) than morality. I agree that, say, one person releasing a biological agent that destroys all of humanity is doing a morally bad thing, but it is not because the world is a morally worse place now that humans are all gone. I experience that sort of judgment as a category mistake. Which is not to say that you are in any way making that mistake. It is just that I feel generally confused about what sort of judgment I am making when I evaluate these outcomes.

    (2) You respond to Angra that you think of moral wrongness as a property of actions rather than situations, which is fine. (That means, I guess, that you agree with me that there is no real sense to be made of the claim that a universe with humans is morally better than a universe without them.) But this means that the set of possible futures that conform to your (2) can be divided into (a) those that occur because humans acted in such a way as to make this result more probable (e.g., anticipating the catastrophe and preferentially protecting the rich and powerful) and (b) those that occur "naturally" (e.g., a cataclysm occurs that kills everyone who isn't on a luxury liner). Of course, we can agree that it is a regrettable feature of our world that mostly only privileged people get to take trips on luxury liners, but that is something we should work on independent of any considerations of apocalypse. It just isn't clear to me that your intuitions are supported in this second sort of scenario. I would choose one of these rather than (3) every time.

    (3) Do you think there is something fundamentally wrong with the idea of preserving some non-random selection of the human race? If, e.g., I were given a choice between preserving a random selection and preserving those who tended to be more cooperative and less inclined to violence, I might choose the latter.

    (4) Do you think that focusing just as much on preventing the survival of a selected group carries with it the danger of coming to regard some cataclysm or other as a fait accompli? With climate change, e.g., some argue against devoting resources to technological solutions that might actually stop or reverse it, saying we should just accept that it is coming and devote our resources to minimizing its impact on those most threatened by it.

    Replies
    1. Hi Randolph,

      I'd like to clarify that I said I think of moral wrongness as a property of actions rather than situations (Saray attributed moral wrongness to situations, it seems to me).
      Whether a universe with humans is morally better than one without them is another issue: I'm undecided as to whether moral badness - rather than moral wrongness - is a property of situations.
      More precisely, some situations are bad, and some worse than others, and that concept of "bad" seems to have some implications involving moral obligations, etc., and involving what's morally wrong to do (more below). I'm inclined to think it's not the same concept of moral badness that is applied to people (e.g., "a morally bad person"), which is a character trait. Yet, my impression is that it's customary in philosophy to consider that concept of badness (i.e., the one applicable to situations) as a (or the?) concept of moral badness. I'm not sure whether this is because there is an implicit belief that it's the same concept we use when we say someone is a morally bad person (the same concept, but it takes somehow a different form in those cases), or a different though related concept that goes by the same name; i.e., "morally bad".


      An example of a bad situation is one in which nearly everyone dies slowly and painfully due to a pandemic. We might then ask whether it's morally wrong to cause that bad situation, to not attempt to prevent it from happening, etc.

    2. Angra, thanks for pointing out my misattribution. Your other points are interesting, too.

  6. Thank you all (Kyle, Dan, Angra, Chong, Randolph) for your comments! So interesting! I don't think I can respond to them without writing a lot. Can I have coffee with all of you and chat about this?

    Replies
    1. Thank you as well for your interesting post and replies!

      I'm afraid I live too far away for coffee, though (I just post here sometimes), but thanks for that too.

  7. Replies
    1. Sorry I am a little late to the party here. But I am not sure inequality, by itself, is morally objectionable. The easiest way to show this is to assume a baseline of equality (of pain, or pleasure), and then a welfare change in one person (a bit less pain, or a bit more pleasure) that does not affect the pain or pleasure of others. Voilà: inequality, but not of any morally objectionable sort, right? So perhaps it's not inequality per se that matters, but only relative advantage and disadvantage, dominance and submission, or something like that, right?

      I am also up for coffee to chat this out with Saray. And just to show I am not an aristocratic jerk, I will bring the beans...
