Sunday, December 15, 2013

What was the best book you read this year?

We asked all the faculty in the Sac State Philosophy department to name the best book (not necessarily philosophy) they read during 2013, and to say briefly what they liked about it. Here, in no particular order, is what they said.


Russell DiSilvestro recommends Humanity: A Moral History of the 20th Century, by Jonathan Glover

This is a selective ethical study of some of the 20th century's most morally regrettable episodes--found especially in various international conflicts (including wars) and repressive regimes (including communism in Russia and China). It's not fun reading given the first-hand accounts of these horrors. But it's sobering to read, and it's very carefully, thoughtfully, and philosophically written. Glover strives to understand how these episodes were even possible, why the episodes were morally regrettable, and how to prevent such episodes from being repeated. Interestingly, he does all this while still retaining his own skepticism about the moral law (he does not think that morality is objectively binding on us in any traditional sense) and his own sympathy with consequentialism (although he is critical of what he calls the 'hard consequentialism' of many of those he writes about).


Matt McCormick recommends: Caught in the Pulpit: Leaving Belief Behind, by Daniel Dennett and Linda LaScola

Dennett and LaScola conduct in-depth interviews with a group of priests, preachers, and clergy who don't believe in God. The insight into these people, who are required by their jobs to have certain beliefs but who have lost them, is just amazing.


Kyle Swan recommends The Idea of Justice, by Amartya Sen

Hey political philosophers, your preoccupation with theories about the nature of perfect justice is at best an interesting bit of autobiography, but irrelevant. At worst, it leads people to think it's ok to run roughshod over those with rival conceptions of justice because they must be irrational or bad. Stop it. Re-focus.


Lynne Fox recommends Brilliant Blunders: From Darwin to Einstein, by Mario Livio.

The book is a collection of accounts of brilliant thinkers who have made important contributions to our understanding of the world and who yet made major errors—largely due to being over-committed to their own theory.


Mike Pelletti recommends The Social Conquest of Earth, by Edward O. Wilson

I was overly confident that Wilson must be wrong in his thesis defending group selection. The arguments of Hamilton (“The Genetical Evolution of Social Behaviour”) and Dawkins (The Selfish Gene) had convinced me that the gene, not the group, and obviously not the individual for sexually reproducing species, was the proper unit upon which natural selection acts. Chapter by chapter, Wilson convinced me that group selection is a viable model for understanding how evolution works and that it provides insights into tendencies missed by gene selection.


Randy Mayes recommends Benediction, by Kent Haruf

This is the last novel of a trilogy about the residents of a fictional rural Colorado town called Holt. The books can be read independently and in any order, and they are all superb. Haruf is one of the most sensitive contemporary writers about the lives of ordinary people that I have ever encountered. This novel is about an older man who has been diagnosed with lung cancer and has only a month or two to live. He is the owner of a local hardware store who has lived a good life for the most part, but who also has some profound regrets. It doesn't sound like a fun book to read during the holidays, but it is strangely uplifting and it contains some of the most beautifully wrought chapters I have read in a very long time.


Tom Pyne recommends The Waning of Materialism, edited by Robert Koons and George Bealer

Essays by the likes of Laurence BonJour, E.J. Lowe, Tyler Burge, Timothy O’Connor, and Terry Horgan, generally arguing that physicalism regarding the mental is a thesis in philosophy, not science. Thus it is not forced on us by the acceptance of scientific developments, prominently in neuroscience. Reductivist and eliminativist programs are given an unsparing audit, and caught cheating. Some essays, like Burge’s, propose alternatives to physicalism. Others, like O’Connor’s, show that Kim’s supervenience proposals rely on questionable assumptions. The introduction by Koons and Bealer is, all by itself, worth the cost of the book.


Brad Dowden recommends 1493: Uncovering the New World Columbus Created, by Charles C. Mann

A new history of how European settlements in the post-Columbian Americas (i.e., post-1493) shaped the world. I didn’t realize that in 1493 the Americas were so densely populated, or that the Amazon river jungles were not a hodge-podge of vegetation but the result of careful planting, or that in the 1500s so much Spanish silver went directly from the slave mines of South America through Spain’s new city of Manila to China, the world’s richest and most powerful country. Before 1493, the Americas had no earthworms, mosquitoes, cockroaches, honeybees, dandelions, African grasses, or rats.


Christina Bellon recommends The World Without Us, by Alan Weisman.

This is a sustained thought experiment (369 pages) in which the author entices us to think about the world minus humanity. It's quite an eye-opening examination of our effect on the land, water, air, and other life forms, in both the near and long terms. It's not a very cheery read, but it is well-written and well-researched. The thing I really like about it is that the impacts and effects are traced in all their complexity, from engineering and energy to art and zoology. It's not a shoot-from-the-hip thought experiment, as too many popular counterfactual analyses can be. In the end, you get a real sense of both our fragility and our dependence upon our technological reconstruction of the world, and of the deeply lasting modifications we have crafted, for good or ill. The book is also interestingly non-judgmental. The author prefers to lay it out for us to think about and draw our own conclusions.


David Denman recommends Late Victorian Holocausts: El Niño Famines and the Making of the Third World, by Mike Davis.

Not one to put you in the holiday mood, but a great book. It's an examination of how Imperial policies turned El Niño droughts into enormous famines (30-60 million deaths in 25 years) in the late 1800s.


Clifford Anderson recommends The New Jim Crow: Mass Incarceration in the Age of Colorblindness, by Michelle Alexander. 

The title refers to the massive and disproportionate incarceration in the U.S. of blacks and Latinos since the "war on drugs" was declared by Nixon and then Reagan. Alexander does a good job of spelling out the several causes of this phenomenon. It is an institutional problem, not simply a problem of racist cops. There is an especially good chapter on a series of outrageous Supreme Court decisions that have significantly contributed to the problem.


Joshua Carboni recommends: Descartes' Bones: A Skeletal History of the Conflict Between Faith and Reason, by Russell Shorto

While the title might suggest an in-depth look at the philosophy of Descartes, the book delves into the philosophy of Descartes and his contemporaries only at a basic, introductory level. Russell Shorto comes from a journalism background, and as such, the text reads like an investigative detective story detailing a treasure hunt. The treasure in this case, however, is not gold but the precious remains of René Descartes. Shorto traces the lines of modernity alongside the fight over, and international travels of, these remains, and in so doing, he illustrates how the thought of Descartes and many of his contemporaries transformed the intellectual sphere in Europe. The most interesting aspect of the book, however, is the story of the bones themselves: their travels, mysterious locations, and the various claims to ownership.


Scott Merlino recommends The Time Traveler's Guide to Medieval England: A Handbook for Visitors to the Fourteenth Century, by Ian Mortimer

The past is a foreign country: they do things differently there. British historian and fiction writer Ian Mortimer presents a travelogue to medieval England. For all you Middle-Earthers and Harry Potter fantasists, here is a dose of reality without the magic. A brisk, eye-opening read.


Patrick Smith recommends Existential America, by George Cotkin.

A historical examination of existentialist themes in American literature, philosophy, art, and culture stretching back to the late 18th century up through the 1980s. Cotkin presents an engaging and entertaining narrative showing the influence of canonical existentialists like Kierkegaard, Nietzsche, de Beauvoir, Sartre, and Camus in the work of figures like Emily Dickinson, Herman Melville, William James, and Ralph Ellison (among many others). Cotkin's chapter on existential moods in Moby-Dick is especially interesting. Highly recommend.


Sunday, December 8, 2013

How to frame a philosopher

Almost all human beings are susceptible to what psychologists call framing effects. This means that people will reverse their preferences and choices based simply on the way information is presented. For example, a doctor who recommends surgery to a patient based on learning that the surgery has a 90% survival rate may well have cautioned against it if she had learned instead that 1 out of every 10 patients will die. 

In this example the effect is partly due to differential vividness. Information about probabilities and percentages does not typically cause strong emotional reactions in normal people. But when the doctor reads that 1 out of every 10 patients dies, she vividly imagines the death of her own patient.

Another cause is at least as potent. The first frame focused the doctor’s mind on the potential gain, whereas the second frame emphasized the potential loss.  Human beings appear to be loss averse, which means that it hurts a great deal more to lose something than it feels good to acquire it.

You probably know that most people are naturally risk averse, but loss aversion is an entirely distinct (and slightly more contested) phenomenon. People show their risk aversion when they opt for a guaranteed gain rather than an uncertain one of higher expected value.  For example, most people would choose a guaranteed 100 dollars over an 80% chance of 150. (The expected value of the latter is .8 x 150 = 120.) The insurance industry is entirely dependent on our aversion to risk. Most of us will pay significantly more than the expected value of an insurance policy for a guaranteed outcome of lesser value.

Loss aversion, on the other hand, can actually cause people to become risk seeking. For example, if you've just lost 200 dollars you may be more than normally attracted to an opportunity to bet 50 dollars on a 10% chance to win 250.  This is a bet you'd almost certainly pass over in other contexts, and rightly so since its expected value is a loss of 20 dollars. What's at work here is our basic inability to simply ignore sunk costs and make decisions strictly on the basis of their value for the future.
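The arithmetic in both of these examples is just probability-weighted averaging. Here is a minimal sketch in Python; the `expected_value` helper is purely for illustration, not from any decision-theory library:

```python
# Each option is a list of (probability, payoff) pairs.

def expected_value(outcomes):
    """Return the sum of probability-weighted payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

# Risk aversion: a guaranteed $100 vs. an 80% chance of $150.
sure_thing = expected_value([(1.0, 100)])        # 1.0 * 100 = 100
gamble = expected_value([(0.8, 150), (0.2, 0)])  # 0.8 * 150 = 120
# Most people take the sure $100, though the gamble is worth more on average.

# Loss aversion: betting $50 on a 10% chance to win $250.
bet = expected_value([(0.1, 250), (0.9, -50)])   # 25 - 45 = -20
# Negative expected value, yet someone smarting from a loss may take it anyway.
```

The risk-averse choice leaves 20 expected dollars on the table; the loss-chasing bet throws 20 expected dollars away.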

Obviously people who are susceptible to framing effects are easily exploited.  What I'm curious about, though, is how susceptible philosophers are to framing. I'd love to believe we are generally less so, but it's an empirical question, and one that would not be too hard to study.  For example, you could take 500 members of the APA and divide them into two groups. Offer one group a discount for early payment of conference fees and inform the other that a penalty will be assessed for late payment. If philosophers were not at all susceptible to framing then we would see roughly the same proportion of early to late payments in each of these situations.

But until an enterprising X-phi doctoral student does the work, we're free to speculate. The obvious argument for thinking philosophers would show some resistance to framing effects is that we're supposed to be pretty good at detecting both logical equivalences and logical and performative inconsistencies. When philosophers wrestle with thought experiments and paradoxes (the Trolley Problem, Qualia Inversion, the Chinese Room, Twin Earth, the Gettier problem, the Raven Paradox) two of our central activities are determining whether descriptions of situations and outcomes are (a) logically equivalent and (b) logically coherent.

On the other hand, there are some features of the philosophical mind that could militate against this happy outcome. The one that stands out most for me is our continuing obsession with certainty. We officially denounced certainty as a criterion of knowledge in the early 20th century, but as a group we still pyne for it. We primarily speak the language of proof and necessity, not evidence and probability. Almost all professional philosophers have taken formal logic at some point in their career, but comparatively few have studied induction in a serious way. This suggests that we might be even more prone than similarly educated people to risk-based preference reversal.

I am also inclined to agree with Justin Smith that contemporary philosophers are not the most curious people in the world.  The X-phi movement may be a harbinger of change, but philosophy still seems to attract a lot of intellectual floogie birds, more interested in the comfort of justification than the thrill of discovery. Mad reasoning skills won't help with framing if your basic instinct is to keep circling until your intuitions are fully fortified. It will do the opposite.

If you are curious about your own sensitivity to framing, consider this example, which bounced off my forehead the first time I read it in Daniel Kahneman's book Thinking, Fast and Slow. The example is from the economist Thomas Schelling, and it shows how our strong moral intuitions can interfere with our ability to think clearly.

Schelling's example is this:  Consider the U.S. federal tax exemption for families with dependent children.  If you are even a slightly liberal-minded person you probably agree that this is a terrible idea:  The rich shall be given a larger exemption than the poor.

Fine, bad idea. But now consider that (just as with our discount vs. penalty example above) the tax code language is arbitrary. We can state an equivalent policy as a surcharge to be paid for each dependent child you lack, relative to some reference number. (If you don't immediately see this, just consider what it is like to be a childless taxpayer not getting the exemption. You are literally being charged for not having children.)

Now consider this proposal for a system of surcharges. Rich people will pay a smaller surcharge than poor people. Well, that's obviously just as terrible an idea.  These are both policies that transparently favor the rich over the poor.  They are just stated differently.
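The equivalence is easy to verify with toy numbers. A quick sketch of Schelling's point; every figure here (base tax, exemption amount, reference family size) is invented for illustration and has nothing to do with the actual tax code:

```python
# Schelling's point with made-up numbers: an exemption per child and a
# surcharge per "missing" child are the same policy in different frames.

BASE_TAX = 10_000   # tax owed by a family with the reference number of children
EXEMPTION = 1_000   # dollars per dependent child
REF_CHILDREN = 3    # reference family size (arbitrary)

def tax_exemption_frame(children):
    """Frame 1: a childless taxpayer owes the default; each child reduces it."""
    childless_default = BASE_TAX + EXEMPTION * REF_CHILDREN
    return childless_default - EXEMPTION * children

def tax_surcharge_frame(children):
    """Frame 2: the reference family owes BASE_TAX; each child you lack adds a surcharge."""
    return BASE_TAX + EXEMPTION * (REF_CHILDREN - children)

# The two "policies" assign identical liabilities to every family size.
for n in range(REF_CHILDREN + 1):
    assert tax_exemption_frame(n) == tax_surcharge_frame(n)
```

Same numbers, two descriptions; only the frame differs.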

If this point is perfectly obvious to you, then congratulations! You've just been framed.

G. Randolph Mayes
Professor, Department of Philosophy
Sacramento State University

Sunday, December 1, 2013

Holidays and the absurd: it's all about us

by Vadim Keyser

As the mild autumn approaches the soggy Sacramento winter, and the oldies radio stations, one-by-one, become clogged with the holiday spirit, I start feeling a bit uncomfortable. Around about Thanksgiving that feeling becomes more precise. I can tell you what gives me that feeling: taking a large bite out of sweet potato casserole only to realize that it’s made out of stinking yams; seeing people lined up outside of every store like a bunch of cold turkeys, and lined up in stores to buy 45 million cold turkeys; and television. I can tell you what that feeling is like, and I can get you to feel that feeling. All I have to do is sigh, or make you watch a Charlie Brown special. Ostension is a powerful tool for directing understanding (shout outs to Dr. DiSilvestro). But how can I describe this feeling and the cause behind it in analytically precise terms?

Thomas Nagel gives us a working answer in his 1971 article, "The Absurd." In it, he analyzes absurdity as a phenomenon, and tries to answer two questions:
  1. What makes things absurd? 
  2. How can we solve our absurd situation? 
I think we can use Nagel’s account in a very practical way—to answer this question: What is wrong with the holidays?

According to Nagel, most people, on occasion, feel absurd but get the source of the feeling wrong. That’s because we appeal to our intuitions to figure out the cause. You might think that what makes things absurd is the realization that we are tiny in space-time. During the holidays we have a lot of time to think. We might find ourselves taking a walk through an empty city or a cold countryside. We look around (mostly up), and, oftentimes, these thoughts emerge: “How brief is my existence” and “How small I am." I once heard Bill Nye speak at the University of Nevada, Reno.  At one point he flashed a slide where the earth was pictured as a tiny jellybean among other jellybeans. He said, “We are a speck within a speck, within a speck…and that sucks.”

But according to Nagel, brevity in existence and smallness in size do not explain the feeling that life is absurd. Some thought experiments may help. Imagine yourself as immortal. You watch societies and solar systems cycle through creation and destruction. Does this take away the feeling of absurdity? Immortality seems to magnify absurdity. What about size? Imagine humans as large as galaxies (imagine a bunch of gigantic, floating baby humans). This just magnifies the feeling of absurdity as well. According to Nagel, the reason absurdity doesn’t go away when we add more time and space is that the absurd is not due to some external condition that can be modified. It is a property of something deeper within the human psyche. We’re like balls of cookie dough with salmonella inside of them. Adding more cookie dough isn’t going to get rid of the salmonella—in fact, the condition worsens and spreads.

According to Nagel, what makes things absurd is a discrepancy between aspiration and reality. For example, you confess your love to an answering machine (or worse, you accidentally get Liam Neeson's line, and he’s having a very bad day). This seems to parallel Camus’s answer about what makes things absurd. Like Sisyphus, the working human is pressed up against his/her rock—the cold reality of it grinds against our warm faces. But Nagel says that the discrepancy isn’t between us and reality. This is because we don’t have much access to what reality really is. Rather, the discrepancy is between us and us. I think this is where Nagel’s description of the absurd becomes fascinating and surprising.
This condition is supplied...by the collision between the seriousness with which we take our lives and the perpetual possibility of regarding everything about which we are serious as arbitrary or open to doubt. [178, my emphasis]
What makes things absurd is a discrepancy between perspectives. On the one hand, taking life seriously is unavoidable. On the other hand, when we step out and take a look at our situations, we’re these organisms that wear other organisms on our feet, and sit on dead trees, and tap plastic keys for extended hours and laugh at a lit box, and put mush into holes that exist on our faces. Why do we do these things? When we take a look at our habits, inclinations, norms, and justifications there is only doubt. Even if we have a divine purpose—the universe becomes a part of us, and we become part of the universe—we still can’t get rid of the annoying nephew within us, constantly asking, “Yeah, but why?” 

Nagel says that all values can only be justified by reference to themselves, and when we try to zoom out to reach a foundation, we only end up in the thin air of doubt—the view from nowhere. But the absurdity is not in the fact that humans can take the nebula’s eye view of themselves. Rather, it’s that once they do, they can go back to their daily activities as though nothing happened. This is why the holidays are a prime time for the absurd process. We reflect, we doubt, and on Monday, we wake up at 6 am to a batch of emails that make no sense, and, at the same time, a lot of sense.

According to Nagel, we can never get rid of the absurd because it’s within us. My bone to pick with Thanksgiving is really something that’s within my bones. So how do we solve our absurd situation? Nagel’s solution is that we accept our self-transcendence and have some fun with it—we use irony to be light about our being. I like this solution just fine. I think this is why Woody Allen’s films are so enjoyable. But I’ve always respected Albert Camus’s solution a bit more. You see, I think Nagel gets Camus all wrong. He says that Camus' Sisyphus is self-pitying and that his only solution to absurdity is to shake his fist at the world. I beg to differ.

I think Camus is talking about shaking a fist at yourself. For Camus, the rock isn’t the external world; rather, it’s our response to the external world. The rock is an amalgamation of all of those anxiety-producing reactions we have to the world. The doubt that Nagel speaks of can be clumped right in there along with hope, fear, and regret. Sisyphus has a chance at happiness only because he makes a conscious choice to rebel against those aspects of himself. He is stronger than his fate because he can rebel against fear, hope, doubt, and regret. I like this solution because it’s more hands-on.  Camus doesn't just passively accept our fragmented nature.

What’s wrong with the holidays? Whether you like Nagel’s account or Camus’s account, one thing seems clear: The answer is: us. But fret not. This makes the problem easy to solve.

Vadim Keyser
Lecturer, Department of Philosophy
Sacramento State University

Sunday, November 24, 2013

Overriding ignorance and respect for autonomy

by Dylan Popowicz

Last year, I found myself in the minority during a debate on the topic of personal autonomy*. In dispute was the following case:

A woman (let’s call her Mrs. Jones) has been diagnosed with cancer but refuses to believe what her doctor is telling her. A person with cancer, she believes, would not feel as well as she does; a person with cancer, according to her belief, would rapidly lose weight, whilst she has in fact gained a few pounds. According to her own prior experience with cancer (through word of mouth, film, etc.) it isn’t possible that she has cancer—thus, she refuses to take any medicine or follow any course of treatment.

This question followed:

Q1: Assuming that it is certainly true that Mrs. Jones has cancer, but that she doesn’t believe that she does after being proffered the appropriate evidence, would it be right for the doctor to secretly or forcefully administer treatment?

The class’ overwhelming response was that it would be immoral for the doctor to administer any drugs or medication without Mrs. Jones’ permission. The central principle of autonomy, our moral obligation to respect and protect an individual’s autonomy, had won out over every other (beneficence, nonmaleficence, etc.). Even with an important adjustment to the case, the majority opinion would not sway: it made little difference if we imagined that the cancer could be treated by something as innocuous or unremarkable as a pill. The issue was not pain, or harm of any sort, but autonomy through and through.

It seemed to me that there was a world of difference between the autonomous choice of one holding a skewed relationship with reality, and the autonomous choice of one acting upon reflection of the facts. Of course, any such consideration was answered swiftly: but surely a part of one’s being autonomous is the ability to choose what one believes.

Now, I think that there is a profound philosophical problem at the root of such an untethered conceptualisation of autonomy. It seems peculiar to think that we could choose what we believe, but even if we could, it seems that our range of choice is limited to a certain educational conditioning. A way of teasing this issue out is to change the situation slightly: let us consider that the only reason that Mrs. Jones is denying the doctor’s assertion is due to her distrust of ‘people of colour’, and that the doctor is a black man. A new question could be (and was) asked:

Q2. Assuming that it is certainly true that Mrs. Jones has cancer, but that she doesn’t believe that she does due to the fact that she does not trust the opinion of any non-white person, would it be right for the doctor to secretly or forcefully administer treatment?

Let us assume that there is no other doctor available, no white authority to ask—either Mrs. Jones is treated by the black doctor, or she swiftly and painfully dies. Still, the majority of my class (though a tad fewer) argued that it would be immoral for the doctor to act against Mrs. Jones’ will. Even when I suggested that we certainly knew that Mrs. Jones would accept the treatment if she herself believed that she had cancer, the class still stood opposed to action—the relationship between belief and truth was not nearly as important as the agent’s ‘choice’.

I’d like to suggest that a more robust conceptualisation of autonomy would take more into consideration than the childish assertion that action x is autonomous simply because subject y “wanted to” do it. I think we need to pay much more attention to how we conceive of agents and their relationship to the world, specifically when it comes to an agent’s epistemic position. An agent chooses based on certain epistemic considerations: we have beliefs or knowledge about the goals of our actions, what we are capable of doing, as well as the present state of affairs, or the point of action itself.

When it comes to these epistemic relationships, it seems difficult to disavow the important role other agents play.

Consider a colour-blind individual, let’s name him Calvin: I’d argue that it is not plausible that Calvin could come to properly understand his own optical deficiencies without the input of a non-colour-blind individual, or what I would like to refer to as an external perspective on his epistemic relationships (a mediating third element). In this situation, the latter individual would have what we could call an epistemic authority. Imagine the (fantastic) situation in which Calvin (who has trouble distinguishing between red and green) has come into contact with a deadly virus, and is sent by a doctor to get the “green” pills from a medical office, only to find that they are located next to a jar of “red” pills. Calvin, using what beliefs he has about the colours “red” and “green”, tragically grabs the wrong pills. Would we consider it a breach of a respect for autonomy if a second individual, let’s call her Claire, who was not colour blind, were to override Calvin’s choice, and make him take the other pills, even against his vehement objections? Remember, it isn’t that Claire is stopping Calvin from reaching a certain end or goal that he desires (the opposite in fact) but rather that she is exercising a certain authority over Calvin’s epistemic position, something that only a perspective from the outside can achieve.

If an autonomous being is taking a course of action towards a certain goal, but unbeknownst to itself, is mistaken in the course of an action’s relationship to the end, surely we can accept a certain sense of paternalism and epistemic authority whilst still leaving sufficient room for autonomy. It isn’t enough to think about an agent as simply ‘acting’ in a vacuum, or to analyse autonomy without making reference to an epistemic context. Ultimately—and I know a lot of readers will resist this on political grounds—I’m inclined to suggest that a full conception of a respect for autonomy would necessitate the need for certain paternalistic acts, based on the epistemic authority of those who know better than we.

* This discussion and debate took place in Prof. DiSilvestro’s Bioethics course in the summer of 2012. My thoughts on the matter are indebted to Dr. D.’s class.

Dylan Popowicz
Senior Undergraduate
Department of Philosophy
Sacramento State University

Monday, November 18, 2013

On the overemphasis of commonalities

by Dorcas Chung

Theology of religions is a branch of theology that attempts to make sense of other religions in the light of one’s own, by seeking a meaningful, authentic, and proper approach of one’s own religion towards others. For example, Joe may take an exclusivist approach to Religion X by emphasizing X’s distinctive and privileged access to Truth. Joe’s aim may be to show how other systems are false and misguided, with the intention of convincing other religious believers to replace their worldviews with Joe’s own. Jane, on the other hand, may view Religion X from the perspective of an inclusivist. While Jane recognizes X’s distinctive claims toward Truth, she does not completely reject the claims of other religions. Other religions may not access Truth with the kind of accuracy and privilege that X does; but they can nevertheless access some important aspects of it.

My interest is to explore yet another approach, called syncretism. Unlike exclusivism and inclusivism, syncretism makes no claim to a religion’s privilege or distinction. For example, as a syncretist Johanna may believe Religion X is only one among many religions with access to Truth. Johanna would thus make no attempt to convert those who belong to other religions, since no sincere religious worldview can be false or misguided. In effect, while other models narrow the scope of privileged access by varying levels, syncretism widens the scope.

Syncretism does seem attractive. Exclusivists can appear to be imperialistic, dogmatic, and/or disrespectful. Inclusivists, perhaps ironically, turn out to be open to the same charge. The syncretic model, on the other hand, claims to avoid the aforementioned defects by de-emphasizing differences, emphasizing commonalities, and asserting that the commonalities are most essential and therefore most fundamental to any religious outlook. In this way, it appears to treat all faiths, and persons of all faiths, with equal respect and consideration, and to facilitate meaningful dialogue and solidarity to the greatest possible degree.

Syncretists typically argue that, while religions make divergent ideological claims, these claims have similar purposes and produce similar outcomes. Religious stories and concepts should therefore be understood only as meaningful metaphors rather than literal events. This is done to acknowledge the symbolic values and virtues of the metaphor, which can in turn be applied universally to other religious worldviews. There is an expressivist rather than a realist tendency here, in the sense that the syncretist wishes to discuss not what is true but what we all feel to be true. The syncretist believes that divergent religious claims are still taken seriously in this way, since the symbols and metaphors can point to common themes found in the various respective religions. The syncretist believes these claims ought not be taken literally, because the differences between religions are actually insignificant, and placing undue emphasis on them can lead to unnecessary conflict.

But syncretism has a serious problem: an inordinate focus on commonalities can actually strip a religious belief system of its most distinctive, essential and robust elements. This is just because common characteristics may not be the most important or most fundamental aspects of a belief system. In referring to a position held by Hendrik Kraemer, Veli-Matti Kärkkäinen writes, “one cannot take individual features out of the context of a particular religion and, on the basis of these individual common features, posit a fundamental sameness of two (or more) religions.”[1]

The point is easily seen if we think about different ethical systems. Consider the action of helping an old lady to cross the street – an action endorsed from various theoretical positions. An egoist, who believes fundamentally in self-interest, may help the old lady because she believes a positive image of herself will contribute to some future success. A utilitarian may take the same action, but do so because the action contributes to yielding the greatest happiness for the greatest number. A deontologist likewise acts the very same way, but out of a sense of duty toward the old lady. Based on this, a moral syncretist might infer that there is a deep commonality between these ethical theories. But clearly it is wrong to claim that egoists, utilitarians, and deontologists all believe basically the same thing. If the foundations are differently rooted, they can and will lead down divergent paths.

The syncretist also fails to recognize the importance of evaluating the roots from which symbolic values and virtues flow. She cannot say that a Christian, Hindu, and Daoist are all basically the same because they happen to endorse similar virtues or causes. Buddhism is not very Buddhist if it does not value impermanence and interdependency above all. Christianity is not very Christian if it does not insist that Jesus Christ is the Messiah.  Islam does not seem very Islamic if Muhammad is not necessarily Allah’s Prophet.

A further problem for syncretism is that some observed commonalities are in fact deeply problematic. The oppression of women, for example, is a phenomenon that is unfortunately found in most cultures and religions. Some religious sects even argue that these positions are rooted firmly in their faith. The syncretist who chooses to disavow repugnant commonalities at least owes us an account of the basis upon which she chooses to do so. And the syncretist who simply ignores the embarrassing commonalities is not... well... very syncretic.

It's interesting to note in closing that the problems of syncretism are similar to those of the melting pot and salad bowl metaphors employed in multiculturalism. Opponents of the melting pot analogy argue that engaging with diversity means valuing, in addition to the commonalities, the differences between backgrounds, as well as the distinctiveness of respective backgrounds. A melting pot construal overemphasizes apparent commonalities, but underlying these commonalities are often the implicit predominant worldviews of a society. Thus, the overemphasis on commonalities can actually lead to less meaningful dialogue and undermine attempts at real cross-cultural understanding.


Dorcas Chung
Lecturer
Department of Philosophy
Sacramento State


[1] Veli-Matti Kärkkäinen, An Introduction to the Theology of Religions (Downer’s Grove: IVP Academic, 2003), 183.

Sunday, November 10, 2013

Hostility to genetically modified organisms is lazy and misguided

by Scott Merlino

"Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things." - Douglas Adams, The Salmon of Doubt (2002)

Every week thousands of people protest genetically modified (GM) organisms, and not a few vandalize research sites where GM crops and animals are developed or tested. Many European countries and regions of Asia, Africa, South America, and Australia ban some or all GM products. Greenpeace, for example, has a zero-tolerance stance towards GM. However, Greenpeace co-founder Patrick Moore now advocates ardently for GM crops for humanitarian reasons: GM remedies for dietary deficiencies save lives.

GM refers to any organism whose genotype has been altered, and includes alteration both by genetic engineering (GE) and by non-genetic-engineering methods. GE refers to changes in the genetic constitution of cells resulting from the introduction or elimination of specific genes via molecular biology (i.e., recombinant DNA) techniques. All GE is GM, but not all GM is GE.

GM corrects micronutrient deficiencies endemic where rice is a staple food. Vitamin A provides humans with an essential nutrient for vision, growth and reproduction; its deficiency is a public health problem in more than half of all countries, especially in Africa and South-East Asia. The World Health Organization finds that over 250 million people suffer from vitamin A deficiency and over 1 million die each year from it. Diets low in vitamin A produce over 300,000 irreversible cases of blindness annually, mainly in children, half of whom die within a year. Most of these people live in poverty; their diet is mainly a daily ration of rice. Lack of vitamin A also compromises immune system integrity and thus increases the risk of severe illness and even death from such common childhood infections as diarrhea and measles.

Wild rice grains contain a negligible amount of beta-carotene, a key metabolic vitamin A precursor. In the 1990s, molecular biologists Ingo Potrykus and Peter Beyer designed the “Golden Rice” cultivar by inserting two additional genes into the rice's DNA, thereby producing beta-carotene in the grain. The presence of beta-carotene, which makes kernels of corn bright yellow, also makes Golden Rice grains yellow. Beta-carotene derived from Golden Rice converts to vitamin A in humans.

If GM organisms such as Golden Rice can save human lives, then why are so many people upset? What exactly is it about GM rice, or GM in general, that people oppose? As many see it, GM is (a) unnatural, (b) untested, (c) unsafe, or (d) over-industrializes agriculture. This last concern is important, especially to proponents of sustainable agriculture, but it is not an objection to GM as such; it is an objection concerning when, how, and to what extent we should use GM crops. I won't address this issue here, but see this 2001 Economic Impacts of Genetically Modified Crops report. Each of the remaining three objections warrants serious consideration, because they are popular and thus undermine much that such technology offers. What interests me is both how weak each objection is and how little available evidence counts for (and against) each.

Suppose that someone accepts GM for crops such as Golden Rice but not for others. It is difficult, then, to sustain an objection to either GM or GE in general. To be sure, GM is mostly used so far to design into massively cultivated crops traits such as selective herbicide or insect resistance. Objecting to this use of GM or GE amounts to objecting to the specific traits produced, not the method by which such traits were produced. But if one objects to specific GM traits, then GM is not the problem, and we change the subject from whether GM is acceptable to when it is unacceptable. This is another conversation worth having, but it is a different issue. Again, either one objects to GM in general or to specific GM traits. One need not reject GM, as a process, out of concern for any potential unintended, bad consequences of specific traits that GM (either GE or non-GE) produces. We don’t reject a whole technology simply because we fear some of its products.

(a) Is GM unnatural? Yes, and so what? As I see it, one cannot oppose GM organisms produced by non-genetic engineering, since this amounts to a rejection of traditional/conventional agriculture, invented at least 10,000 years ago by ancestors who cultivated plants and domesticated animals to suit their needs and wants. Cows, sheep, goats, pigs, horses, corn, wheat, rice, soybeans, and potatoes have all been genetically modified via selective breeding. We don't reject all or even most human agricultural manipulations of these species, so we cannot consistently reject all GM organisms. Of course, all GM is unnatural, but then all artificial selection is unnatural. Civilization depends upon artificial selection. We are already living in and dealing with the consequences of human interventions in (or expressions of) the natural order. We innovate, observe consequences, and alter our ways so as to avoid the most demonstrably negative outcomes - this is nothing new.

What about genes moving from one species to another? Non-deliberate gene flow is possible when GM crops are grown in areas where interspecies contact occurs with non-GM crops or weedy species. It already happens in nature in wild populations, and in cultivated crop plants resulting from conventional selective breeding. However, rice species, and species in general, have significant reproductive isolation owing to their different genotypes, which makes them unlikely to hybridize with one another.

To be fair, there is something more specific to which many GM opponents object, namely genetic engineering (GE), which is a kind of GM. Calling these GM techniques unnatural distinguishes molecular techniques from conventional plant and animal hybrid production methods such as outcrossing, crossbreeding, and inbreeding. GE is essentially biotechnology applied to genes. But we already accept such technologies in medicine. Since the 1990s, gene therapy researchers have been using "genes as medicine" in treatments for cystic fibrosis, diabetes, and cancer, and even for enhancing musculoskeletal tissue regeneration or inhibiting disease progression in brain disorders, stroke, and traumatic brain injury. Creating novel gene combinations in organisms is not without possible perils, but this is a reason for careful design, controlled observations and tests, and above all vigilance. So many unfortunate people stand to benefit from such genetic engineering that it is inhumane and anti-scientific to block such innovations out of fear alone.

(b) Is GM untested? No. Even a superficial literature search reveals that GM products and consequences have been and continue to be subject to peer-reviewed, controlled tests designed to reveal likely hazards to human health and the environment. People voicing this objection need to overcome their intellectual torpor and do their homework on this. I recommend starting with the 2004 National Academy of Sciences report "Safety of Genetically Engineered Foods: Approaches to Assessing Unintended Health Effects." And the most recent 2013 systematic review of tests published in Critical Reviews in Biotechnology concludes that “scientific research conducted so far detected no significant hazards directly connected with the use of genetically engineered crops.”

(c) Is GM unsafe? Possibly, but that a process or product is possibly unsafe is a good reason for us to proceed with caution, and never a rational reason to forego research, development and testing, especially when profound improvements in human health and welfare are demonstrable. It is quite difficult to prove that something is safe, especially when people disallow or destroy research facilities. But tests for actual unsafe consequences have been done (see above).

Further, when studies designed specifically to detect adverse effects find no statistically greater risks using GM, opponents overlook or deny these results. In the US, FDA approval requires that each new GM crop be tested. If a new protein (trait) has been added to the genome, the protein must be shown to be neither toxic nor allergenic. The European Union has invested more than €300 million in research on the biosafety of GM organisms. After a decade of research, its recent 2010 report (p.16) concluded "GMOs are not, per se, more risky than e.g. conventional plant breeding technologies."

Yes, some investigators conclude that some GM organisms are unsafe. But few published studies survive expert scrutiny. One spectacular case worth reviewing fully is the 2012 Seralini study alleging that herbicide-resistant corn caused cancer in rats. Its problematic experimental design and low statistical power provoked this 2012 European Food Safety Authority review.

By the way, one cannot assert consistently that GM is unsafe or dangerous and untested in the same breath, since the only way we may reliably show that any specific GM is a danger or unsafe is by testing under controlled conditions. If there is no such test, then there is no evidence that GM is either safe or unsafe. Speculation, anecdotes, and poorly designed studies that fail peer scrutiny will never satisfy burden of proof requirements even if they satisfy the lazy among us.


Scott Merlino
Senior Lecturer
Department of Philosophy
Sacramento State

Sunday, November 3, 2013

Divine action

by David Corner

It is important to theistic religion that it conceives of God as being active in various ways. God creates the world, and human beings; he is also thought to perform miracles. On the usual picture, everything God does in the world is something that God causes to occur. But God is not normally conceived as being part of the natural world himself. He is supernatural. This means any event in the world that expresses God’s activity is understood to be the effect of a supernatural cause. Let us call this view- the view that God acts by supernaturally causing things to happen in the natural world- supernaturalism.

I have trouble getting a clear grasp on this idea of supernatural causation. I think I have a handle on the notion of a cause generally, though maybe I am fooling myself here. But for example, I think I understand the following claims:
  • Sally caused her car to start by turning the key in the ignition.
  • Sally caused Betty to get a sunburn by leaving the tanning bed on for too long.
  • Sally caused the chandelier to fall by cutting the chain from which it was hanging.
  • Sally caused the vinegar to foam up by putting baking soda in it.
All of these claims have something in common. First, they are all instances of what is called “event causation,” in which one event, a cause, brings about a second event, the effect. So for example, the event (1) represented by the key’s turning in the ignition of Sally’s car causes the event (2) of the car’s starting. But also: Sally is a human being, and so she has a body, which is a physical thing. And these kinds of causes fall under familiar principles: there are mechanical causes, electromagnetic causes, chemical causes, and so forth. Furthermore, in these cases, the causal relationship exists between physical things that have physical properties, like mass, wavelength, charge, chemical valence, and so on.

But God is usually thought of as being able to cause things to occur without having a body. Furthermore, God is conceived to be all-powerful. So if God exists as theistic religion supposes he does, he could start a car, give Betty a sunburn, make a chandelier fall, or make vinegar foam up. How can he do all these things without a body? Remember, God is not a physical thing, and so he has no physical properties. He has no mass, no charge, no wavelength, no chemical valence.

A supernatural cause has, as far as I can tell, nothing in common with any other sort of cause. Thus it seems to me to be an empty notion. Why call this “causation” when it bears no resemblance at all to any other sort of cause? Suppose I told you that I had a pet bird called an Oolumph. You are interested; what sort of bird is this? You have never heard of such a thing. Does it have wings, feathers? Does it lay eggs? I inform you that it has nothing in common with any other birds with which you are familiar. You would be within your rights to ask me why I call it a bird.

I suppose we could say that supernatural causation is just its own kind of cause. (This would be like suggesting that the Oolumph is just its own kind of bird.) A particularly inviting possibility might be to deny that supernatural causation is a form of event causation. But thinking of God’s activity in causal terms invites confusion. For instance, theistic philosophers suppose that God created human beings- i.e. that he caused them to exist. But then they often suppose that this account of the origin of human beings competes with the account given by evolutionary biology, so that only one of these accounts can be correct. They seem inclined to suppose so because they think the word “cause” is univocal (i.e. means the same thing) in these two sentences:
  • Human beings were created (i.e. were caused to exist) by God, and
  • Human beings came about (i.e. were caused to exist) by evolutionary forces.
It seems to me that taking these two claims to be in competition with one another has been responsible for a great deal of trouble. If I insist that my Oolumph is its own kind of bird, I should not enter it in any bird shows.

The problem of divine agency that emerges here is a familiar one to philosophers. We encounter it in what is known as the problem of mind-body interaction, which arises in regard to a theory of mind known as substance dualism. It appears as though mind and body interact in various ways; the physical event of hitting my thumb with a hammer, for example, causes a mental event to occur, namely the sensation of pain. The mental event of my willing my arm to go up causes the physical event of my arm’s going up- or so the substance dualist supposes. But the substance dualist denies that the mind is a physical thing. How is this sort of causal interaction possible if the mind is not physical? Supernaturalism seems to be a variety of substance dualism.

All this talk of supernatural causation presents a serious problem for the view that God acts in the world- and by extension for theism generally. What to do?

One solution is to abandon theism. Science tells us everything we need to know about what happens in the world; there is no room for God. Or perhaps there is a God, but God never does anything. Obviously neither of these is a very attractive solution to the theist.

I think there is another solution. I don’t have the space to fully explain it here, but perhaps I can give some general indication of the direction that I think theism ought to take. The mistake that supernaturalism makes is in supposing that God can only act by being the cause of events in the natural world. It’s tempting to suppose this is true of agency generally- that anything an agent does, she does by causing something to occur. This is false.

Some time ago, a philosopher by the name of Arthur Danto noticed that some of the things we do are not things we cause to occur. He referred to these as “basic actions.” This notion received quite a bit of discussion from other philosophers, such as Donald Davidson, Alvin Goldman, and Jennifer Hornsby. Davidson refers to these as “primitive actions,” and says on p. 49 of his book, Essays on Action and Events, that “event causality cannot…be used to explain the relation between an agent and a primitive action.” Suppose that I move my fingers, thereby flipping a switch, which causes a light to go on. We would rightly say that I caused the light to go on, and that I caused the switch to be flipped. But I do not cause my fingers to move. I simply move them. This is a basic action on my part.

This means that we can speak of an agent as performing an action without making any references to causes.

We have seen that serious problems are implied by the view that God supernaturally causes anything to happen. But if God exists, we can say that God acts in the world in a basic sort of way. There are things that God “simply does,” in the same way that I move my fingers. These are not things that God causes to occur. I suggest that theists abandon talk of supernatural causes and speak instead in terms of basic divine action.



As a postscript, I regret to report that Arthur Danto died just a few days ago, on October 25th, 2013, and I would like to provide a link to his obituary in the New York Times. Though he wrote on many topics, he was best known for his work in the philosophy of art. I had the pleasure of conversing with him on a couple of very memorable occasions.


David Corner
Senior Lecturer
Department of Philosophy
Sacramento State

Friday, October 25, 2013

Shouldn’t I Agree with David Chalmers?

by Matt DeStefano

David Chalmers and I disagree about issues in philosophy of mind. Given that I believe David Chalmers is an expert, and I am not, shouldn’t I revise my belief to agree with Chalmers? After all, he probably knows better than I do.

A lot of attention in philosophy has been paid to peer disagreement, and to how one ought to revise her view in light of knowing that an epistemic peer disagrees with her (an epistemic peer being someone with the same evidence and the same reasoning abilities as she has). There is an additional question about how we should respond when experts disagree with us.

There are some conditions under which disagreement might serve as a defeater for a position you hold. For instance, imagine that Katie and Andy are watching a horse race. Katie thinks that the horse named Ain’t Misbehavin won, while Andy thinks that Tango Goin Cash has won. (I found these names by Googling “horse names”, it was very entertaining.) Katie knows that Andy's vision is about as good as hers, was watching the race just as closely, and is her “peer” in the relevant sense. They are both equally likely to have made a mistake in declaring a winner. In this case, we might think that Andy’s belief that Tango Goin Cash won serves as a defeater for Katie’s belief that Ain’t Misbehavin won - and vice versa. At the very least, we might encourage Katie and Andy to suspend belief about what horse won (perhaps until the results are revealed by an official).

What about expert opinion? Suppose Katie is watching the race with Susan, an expert analyst, who has a proven track record of correctly guessing winners as they cross the finishing line. Susan has excellent vision, and has been watching horse races for her entire life. If Susan were to say that Tango Goin Cash has won, we might think that Katie ought to revise her view to agree with Susan, since she is an expert. At least, it seems she should be more inclined to revise her belief in this case than in her disagreement with Andy.

There are other cases where it seems we are justified in believing something based on expert opinion alone. For example, I am feeling sick and decide to go to four doctors to get different opinions about my ailment. Three of them tell me I have the flu, while one says I have allergies. Most people would say that I am justified in believing that I have the flu, based on the majority opinion of the medical professionals.

Consider philosophers who are not epistemic peers, such as David Chalmers and myself. He has been studying issues in philosophy of mind for far longer than I have, he is more knowledgeable about the relevant arguments, and he is more intelligent than I am. When I find out that David Chalmers disagrees with me about some important issue in the philosophy of mind, should I revise my belief in deference to his expertise? Intuitively, this seems like a rational course of action. However, there are plenty of experts who disagree with Chalmers. In fact, Chalmers’ positions are in some ways minority positions among relevant experts. Considering this, we might think that the proper way to decide what we are justified in believing is to count the number of philosophers who hold each position: whichever position the majority of philosophers believe is the one we are justified in holding. This is a truly ridiculous way to determine our beliefs, and we need a better way to understand the influence of expert opinion on our beliefs.

Philosophers have taken a number of different positions about how we ought to treat peer disagreement. Some have argued that one ought to “split the difference” between the two views, and either suspend judgment or come to a middle position. Others have argued that one should “stick to their guns”, and believe what they believe despite disagreement. These views do not easily capture how one ought to respond to expert disagreement.

One view, the Total Evidence View, presents a framework for handling disagreement as a form of evidence. Thomas Kelly has argued that peer disagreement should count as higher-order evidence, and we should revise our beliefs based on our total evidence. Roughly, this position argues that “Rather, what it is reasonable to believe depends on both the original, first-order evidence as well as on the higher-order evidence that is afforded by the fact that one’s peers believe as they do.” (p. 32 of linked paper)

We can extend this “Total Evidence View” to our own disagreement with experts. This is useful in understanding why I might be permitted to believe (and even be justified in believing) my diagnosis on the basis of a majority medical opinion, but not permitted to believe (and certainly not justified in believing) what the majority of philosophers believe. My first-order evidence in the case of my illness consists of my symptoms and a very limited idea of what these symptoms might entail. My higher-order evidence is that these doctors are much, much better at diagnosing illnesses than I am (this might also be first-order evidence), and that a majority of them seem to agree that I have the flu.

On the other hand, I have much more evidence for my positions in philosophy of mind. While I do not have the same breadth of evidence, or ability to discern what position it supports as Chalmers might have, I have enough to form a reasonably justified opinion about these matters. In this case, expert opinion should not weigh nearly as heavily in determining my beliefs as it did in the diagnostic case. If we consider disagreement as higher-order evidence, we can much better decide how to handle disagreement among both peers and experts. It also eases my personal discomfort about disagreeing with Chalmers.


Matt DeStefano
Sacramento State Philosophy Alumnus
&
Philosophy Graduate Student
University of Missouri, St. Louis




Saturday, October 19, 2013

Hate Crime: First Amendment Issue?


Not too long ago I believed that the idea of a 'hate crime' was, on the whole, a bad one. I was mainly persuaded by the following argument:

According to the law, we determine the nature of a crime by what was actually done. When we classify an action as a hate crime, we are punishing the criminal for the very thoughts in his head, or the content of his speech. At the very least, this is a violation of First Amendment rights. At worst, we are legitimizing the Orwellian idea of 'thoughtcrime'.

Upon reflection, however, I realized that this argument misses the point of what I think is the most important reason that some crimes should be classified as hate crimes. When the law is applied to an act to determine its criminality, we already do consider the motivations and thoughts of the actor in the case. For example, if one person causes the death of another, we ask whether the act was purposeful, whether it arose from a moment of extreme provocation or planning, and so on. In other words, intent, what was going on in the actor's mind at the time, is essential for determining the criminal nature of the act.

One of the main reasons for this consideration is how much of a threat the criminal presents to the community. This is why, for example, we consider intentional, deliberate violent crimes to be worse than accidental violent crimes or crimes of passion. A person who kills someone out of anger upon catching him cheating with a romantic partner, for example, is considered far less dangerous to a community than one who plans and then executes a shooting spree in a public place. A person who kills someone after planning the crime ahead of time also presents a larger danger to a community than the 'heat of the moment' killer, since he reveals himself to be capable of killing at least one person even after sustained reflection. While the danger is still mainly confined to a single target, the killer is still a potential threat to the wider community in this sort of case since he might become homicidally angry at someone else.

A person who commits a hate crime also presents a wider danger to the community because her intention to harm does not have a single target. The target of her hate or anger is an entire class of people, as the evidence of her own expressed intent and beliefs reveals. The harm that she does, or intends to do, is likely to be far more widespread.

The way the law determines whether or not a crime is a hate crime is nearly identical to the way it determines whether a homicide is first degree murder, second degree murder, or manslaughter. I think this sort of distinction is necessary and appropriate. Hence, I think that the separate classification of hate crime is appropriate as well.


We just need to be careful, as a society, that we don't become overzealous in applying the term to thoughts and speech alone, or to morally repulsive but relatively harmless actions.


Amy Cools
Sacramento State Philosophy Alumna

Sunday, October 13, 2013

Courageous Ostension: Or How Philosophy Can Ruin a Perfectly Good Joke

by Russell DiSilvestro

Have you heard this joke before?

A philosophy professor determined all student grades for the semester by a two-hour in-class written essay exam. On the day of the test, he strolls in and announces that there will be only one question on the test, and struts to the chalkboard to write three words in big capital letters: “What is courage?”

All students begin writing furiously, knowing that their entire grade hangs on what they can scribble on paper in the next two hours. But one student in the front row stands up, slaps his paper on the professor’s desk, and marches out of the room.

The professor picks up the paper, and on it are just two words: “This is.”

Why is it that we find this joke amusing?

It’s probably not the historical accuracy of the joke. I suspect this is a kind of academic urban legend that never actually happened.

It’s also probably not the wisdom of the joke’s hero, the student—at least not his practical wisdom or street smarts. Granted, some tellings of the story have the happy ending of the student getting an ‘A’ from the professor. But it is risky for a student to pull a stunt like that on a final exam. Like a nearly identical story—in which the relevant question-and-answer are “Why?” and “Why not?”—our original story might well include a safety disclaimer like “don’t try this at home.”

No doubt there is something attractive about the student’s bit of chutzpah: even if he’s not wise, he’s bold, and righteously so. And hence his two-word answer is true. The act is courageous. (While Aristotle would remind us to distinguish courage from recklessness—see previous paragraph—I shall set that to one side.)

But I’d like to focus on something that catches my attention about the story for a moment: the student said a lot just by pointing to something particular.

Philosophers sometimes call this pointing ‘ostension.’

You can look up ‘ostension’, but beware. When I searched ‘ostension’ in the dictionary bundled with my computer the closest word I found was ‘ostensive’ (“adjective; directly or clearly demonstrative; Linguistics-denoting a way of defining by direct demonstration, e.g. by pointing”), and Wiktionary online included among the definitions a theological one (“the showing of the sacrament on the altar so that it may receive the adoration of the communicants”).

Of course, the student in the story wasn’t pointing with his finger. He was using scribbled letters to point.

And he wasn’t pointing at someone else, or even at some physical point in space. He was pointing at himself. Or more precisely, at his own words on the sheet. Or, more precisely still, at his act of intentionally writing those words and then submitting them. He was pointing at a pointing. It was a kind of reflexive, self-referential pointing.

How is ostension possible to begin with? I will let my philosophy of language colleagues jump on that one.

How is human ostension similar to, or different from, the sorts of things non-human animals and/or computers do? I will let my philosophy of mind colleagues catch that one.

But I’ve been thinking a bit recently about ostension as a way of making a philosophical “point” (pun intended) in the context of moral concepts. Like courage.

I think we slip into ostension frequently when doing moral philosophy. What’s morally good? “Those are” (pointing to a pair of students helping a classmate pick up a pile of dropped books). What’s morally wrong? “That is” (pointing to a student sleeping at the front of class). And so on.

These ways of pointing to particular things in a moral context are sometimes just ways of getting the discussion going by giving examples. But they are sometimes used as a way of pushing back against a demand for a more precise definition, like Justice Stewart’s famous quote fragment about how some types of obscenity may be hard to define, but “I know it when I see it…”

Socrates would not have been satisfied with the student’s answer. The student gave a particular example of courage. But the professor’s question may have been aimed at getting a general definition of courage—the thing in common in all particular examples of courage.

Perhaps—and this may over-explain things—this is what gives the joke some of its charm. The philosophy professor asked a question that he wanted a Socratic-style definition for; the student gave an answer that, while correct, ignored Socrates. Perhaps ignored Socrates on purpose.

In my own research, I sometimes notice that ostension is used in discussions of a thing’s “moral status.” Here are a few distinct questions that each get at the “moral status” of some thing or other: What things have the sort of moral value that it’s good to have them in the world? What things have interests, and can be harmed by having those interests set back? What things have moral entitlements or rights, like the rights to life and liberty?

And such questions are often answered, at least initially, by pointing, with words or gestures (or both): ‘This thing here.’ ‘That thing there.’ ‘Me’ (pointing to myself). ‘You’ (pointing to you).

Moral status discussions sometimes happen when we are talking about the proper way of treating nonhuman animals, like those used for food (chickens, cows) or medical research (chimpanzees, guinea pigs). And they sometimes happen when we talk about human organisms at different stages of development (infants, fetuses) or states of disease (brain-damaged, comatose).

In this area and others, I think ostension can often be, not just a discussion-starter, but a game-changer. It sometimes functions like “the buck stops here.” And sometimes legitimately so. Example: “Your clever theory entails that there’s no love anywhere? But this here [pointing] is love. So too bad for your theory.”

How can philosophy ruin a perfectly good joke?

This is how.


Russell DiSilvestro
Associate Professor
Department of Philosophy
Sacramento State

Sunday, October 6, 2013

Time passes

by Brad Dowden

The essence of nowness runs like fire along the fuse of time.*

Time passes. It flows. Or so they say. Great philosophers say this.

But they are mistaken.

Fluids flow, but time flows only in a metaphorical sense. This is the sense in which future events somehow move constantly closer to our Now, while past events recede ever farther into the past, just like a runner who approaches us, passes us by, and then recedes. We all experience this flow, but only in the sense that we all experience optical illusions.

Physicists sometimes speak of time flowing in another sense. This is the sense in which change is continuous rather than discrete. In this sense, time may flow, but this isn’t the sense of “flow” that philosophers are usually talking about.

Physicists sometimes carelessly speak of time flowing when what they mean is that time has an arrow, a direction from the past to the future. In this sense time definitely does flow, but again this isn’t the sense of “flow” that philosophers are usually talking about.

In the sense of “flow” that too many philosophers do promote, I believe they are confusing time existing with time flowing. Here is why. Things change, so time exists. But that change doesn’t itself change; so, it’s a mistake to say the change “flows.” Let me explain this a little more. Time is what clocks measure. Time is a measure of change that puts dates on events, and tells us how long an event lasts, and says which events happen before which other events. That isn’t the same thing as the flow of time. When things change we say, “Time flows on,” or “Time stops for no one,” but these are inaccurate, poetic remarks. The changes are a sign of time existing, not time changing. When you experience change from eggs to omelets, or change from here to there, you are experiencing time itself, not a passage of time, nor a passage of the passage.

If you can place a date on an event and say it occurred, say, on Tuesday, then that same event doesn’t flow into Wednesday and then on into Thursday. It’s always an event that occurred last Tuesday. So, it is a mis-description of events to say they flow from the present into the past, yet that is what too many philosophers do say.

If time passes, what does it pass? Maybe you want to say it passes us. Hmm. Does it pass our childhood just as fast as it passes us now? Probably it passes at the same rate. OK, let’s assume it does pass at that rate. But what rate would that be? It would have to be a rate of one second per second. But that’s silly. One second divided by one second is the number one. That’s not a coherent rate.

I recommend saying the flow is subjectively real but not objectively real. The mistaken belief that time flows is due to our being misled by careless speech about time (“Time stops for no one”), but it is also due to some objective feature of our brains that makes us “feel” as if time is flowing. I suspect this objective feature is partly our having different perceptions at different times and partly our anticipating experiences before remembering those experiences.

Half the philosophers of time would accept my argument above; the other half believe the flow of time is necessary for “a literal and complete account of Everything That Is So,” to quote from last week’s posting by Professor Pyne. Half of us are mistaken. Which side of this fence are you on–the side that says time is dynamic and flows, or the side that says time is static and doesn’t flow?


Brad Dowden
Professor
Department of Philosophy
Sacramento State

*George Santayana, in The Realms of Being.

Sunday, September 29, 2013

The very idea of Folk Psychology

by Thomas Pyne

Suppose that the phenomena historically associated with demonic possession can be explained as psychotic symptoms.  That would not tempt us in the slightest to adopt reductionist theoretical identities like:
  • Belial  =  Psychotic Condition X
  • Asmodeus = Psychotic Condition Y
Instead we would just say, “There are no demons.”  And eliminate them from our ontology.

What makes a sort of entity A a candidate for an eliminativist program rather than reduction via theoretical identification?  Two conditions:  (i) A does no work in a literal and complete account of Everything That Is So; (ii) A has dubious credentials.  That is, we have reason to suppose that our acceptance of A was based on some cognitive error, or confusion.  An eliminativist program then must account for the error by which we came, mistakenly, to think that there were A’s. 

Contemporary Eliminative Physicalism has been conscientious in its attempt to meet both conditions.  First condition: it claims that attributing mental states is not needed in a literal and complete account of the world. Rather, the place of those attributions will be taken, as Paul Churchland puts it, by the employment of the conceptual scheme of a matured neuroscience.  It’s not that, say, ‘believing’ will be revealed as a brain process; it’s that there is no such thing as believing.  With the science-based conceptual scheme we will be able to talk about what is really going on in the brain instead.   Adoption of the new scheme in place of the old will constitute a “quantum leap in self-apprehension.” (It will accomplish that, of course, only if our ordinary mental attributions really don’t do any work.)

Second condition:  our traditional attribution of mental states is consequent upon a conceptual scheme that embodies a mistaken and inadequate theory.  This conceptual scheme, “Folk Psychology,” is the same kind of error or confusion as invoking demons to explain the voices schizophrenics hear.

The two conditions on an eliminativist program are not independent.  If it turns out that accepting a sort  of entity A is not, after all, a cognitive error or confusion (which is required for meeting the second condition), this weakens our grounds for thinking that A will not figure in a literal and complete account of Everything That Is So.    

This trope of characterizing our mental concepts as ‘Folk Psychology’ should be subjected to sterner questioning than it usually is.  In particular we should question the assumption that our ordinary mental concepts form a theory. Just on the face of it, this is an implausible piece of historical revisionism, and I have thought so from the very first time I encountered the phrase.

In any language with which I am familiar, the common verbs for cognitive activity are of the same antiquity, and are as much semantic ‘roots,’ as the verbs for other common activities. Liddell & Scott’s Greek-English Lexicon thoughtfully prints semantic roots in caps. GEN (‘become’: the root of ordinary Greek epistemic terms, e.g. gnosis), BOL (‘desire’ or ‘intend’), and PEITHO (‘overcome,’ ‘persuade,’ or in the middle voice ‘believe’) are as basic to the language as EDO (‘eat’), PNEO (‘breathe’), BDEO (‘fart’), and BINEO (‘mate’).  In Old English ‘think,’ ‘ween,’ and ‘deem’ are “four-letter” words:  as ancient as ‘walk,’ ‘sleep,’ and ‘shit.’

The best abductive explanation of this fact is that mental terms, like the other terms, designate common ordinary human functions and actions.  There is no particular distinction made between ‘physical’ actions and functions and ‘mental’ ones.  Eating, sleeping, farting, thinking, believing, and desiring are all just Stuff People Do.

To describe someone as ‘believing that it will rain,’ or ‘wanting to lie down,’ is not to offer some sophisticated – though mistaken – explanation of what they’re doing; it’s simply to describe it.  What is a piece of philosophical sophistication is distinguishing between the ‘mental’ and ‘physical’ in a way that makes such attributions seem conceptually troublesome.  But this philosophical sophistication doesn’t license our reading that distinction back into our ordinary conceptual scheme.

To use an analogy, ‘Zeus’s Spear’ is an explanatory concept (in ‘Folk Meteorology’) of a more basic phenomenon, lightning.  ‘Lightning,’ however, is not a term of Folk Meteorology: it does not convey an explanation of anything.  It names the phenomenon to be explained.  Likewise, there is no more basic phenomenon that ‘believe’ serves to explain:  it is the phenomenon.  ‘Believe’ is like ‘lightning,’ not like ‘Zeus’s Spear.’

Eliminative physicalism regarding the mental became a popular strategy again in the ’80s and ’90s, when it grew increasingly clear that reductive physicalism was never going to work.  But candidates for eliminativist strategies are entities with dubious credentials, our belief in which is based on confusions.  Thinking, desiring, and believing hardly come with dubious credentials.  They are common human functions, among the most obvious and humdrum features of our being in the world.

They are, when you stop to think about it, the least likely candidates imaginable for an eliminativist program. After all, there are no philosophers trying to eliminate ‘shit.’

That's because it makes an indispensable contribution to a literal and complete account of Everything That Is So.

Thomas F. Pyne
Professor
Department of Philosophy
Sacramento State

Sunday, September 22, 2013

In which I compare myself to God

by Kyle Swan

There are still many people, mostly outside the academy, who think that moral and political obligations are tied to divine commands. People should (not) do certain things because God says so. This would mean that God has practical authority over people. He makes it the case that people have obligations by simply issuing a command. Or, what I think would be roughly the same thing, God can create reasons for people to act, reasons they didn’t have before, by simply issuing a command.

For example, the ancient tribes of Israel presumably didn’t have normative reason to avoid eating bbq baby back ribs before God said not to eat them. But, according to this account of divine authority, they acquired such a reason when God declared pork unclean. Moral philosophers often talk about this kind of reason being external because the source of the reason is external to the agent at whom the claim is directed, or because the claim is grounded in such a way that the motivational states of mind of that agent are irrelevant. Perhaps many of the ancient Israelites really liked bbq baby back ribs. Too bad.

Here’s another example: if you take a class from me you have to write an assigned paper. Say I assign a paper on Hobbes. You thereby acquire a reason to write a paper on Hobbes. If I instead assign a paper on Rawls, you acquire a reason to write a paper on Rawls. I have practical authority (within this relatively limited domain) over you. Much like God (!) I create a reason for you to act a certain way, a reason you didn’t have before, by simply requiring the assignment. You don’t want to write a paper on Hobbes? Too bad.

Maybe there’s a difference here between God and me. The practical authority I have over my students is contingent on their having signed up for the class. They have voluntarily placed themselves under my (relatively limited) authority. If I assigned a paper on Hobbes to my mail carrier, she wouldn’t thereby acquire any reason at all to write it. But those who review my syllabus, see that there will be paper assignments, and sign up for the class agree to submit to my determinations about the content of those assignments. They presumably do this because taking the class somehow connects up with goals they have or things they care about. So they have internal reason to do it. That seems like an important difference.

I’m not sure these cases really are conceptually different, though. Perhaps God’s authority is similarly contingent, and people’s reasons to comply with his rules similarly grounded in their motivational states. Here’s a section of the narrative where God hands down his law to the ancient Israelites:

Exodus 19:3 Then Moses went up to God, and the LORD called to him from the mountain and said, “This is what you are to say to the descendants of Jacob and what you are to tell the people of Israel: 4 ‘You yourselves have seen what I did to Egypt, and how I carried you on eagles’ wings and brought you to myself. 5 Now if you obey me fully and keep my covenant, then out of all nations you will be my treasured possession. Although the whole earth is mine, 6 you will be for me a kingdom of priests and a holy nation.’ These are the words you are to speak to the Israelites.” 7 So Moses went back and summoned the elders of the people and set before them all the words the LORD had commanded him to speak. 8 The people all responded together, “We will do everything the LORD has said.” So Moses brought their answer back to the LORD.

This looks a lot like a summary of a contract (or covenant). There’s a brief preamble and then promises are made on both sides. The terms are reviewed and accepted and at least appear to be contingent on that acceptance. So suppose the people of Israel in verse 8 had instead said something like ‘Ummm… Thanks for all that, and we really appreciate your offer, but no thanks’? Plausibly, in that case they wouldn’t have had normative reason to comply with all of God’s rules and God wouldn’t have had the standing to demand compliance or to punish them for not complying. The same plausibly goes for surrounding nations that weren’t party to this covenant. The Edomites could eat all the bbq baby back ribs they wanted. It would have been puzzling for the Israelites to demand of the Edomites that they not eat bbq baby back ribs and to hold them accountable if they did. Just as puzzling, perhaps, as me demanding of my mail carrier that she write a paper about Hobbes and holding her accountable when she doesn’t.


I’m not a theologian (though sometimes I try to fake it) and I don’t have too much more to say about the ancient Israelites. But I think the narrative illustrates important things about the social contract tradition, current debates about the nature of practical reason and, perhaps most of all, just how difficult it can be for someone to come to have practical authority over another person. 

Kyle Swan
Assistant Professor
Department of Philosophy
Sacramento State

Monday, September 16, 2013

Ignoring the negative

by Matt McCormick

“Isn’t it weird how celebrity deaths always come in threes?”
“I’m telling you, Asians are bad drivers.” 
“I know it’s not politically correct, but it's true, women just aren’t good at science.”
“I swear I have special dreams.  I dreamt the night before that my mom was going to have a car wreck and she did.”

Confirmation Bias is the mistake of selecting evidence that corroborates a pet hypothesis while ignoring or neglecting evidence that would disprove it. It’s the mother of all fallacies.  And the reason it persists is that it feels so right.  When you’re making the mistake, the conclusion you’re drawing has that shiny aura of truthiness to it.

Humans are guilty of committing it in a wide range of circumstances. At the end of each semester, many students, including students in my (Prof. McCormick’s) Critical Thinking and Theory of Knowledge courses, where we study confirmation bias extensively, blunder into it. They get a grade for the course that is surprisingly low and send an email to their professor asking to know what happened. As far as they knew, they were doing great in the course. They recall getting an A on an assignment, and doing pretty well on the midterm, and feeling pretty optimistic, so they can’t understand the low grade. Here are a couple of real emails:

Student Email 1: I just checked my grades for the Spring semester and was surprised to have earned an F. I completed the major assignments for the course and did well on the midterm (90%) and well on the final (85%). I know I didn't participate in the online forum as much as was required but I'm still confused about the grade. I took the class material seriously and did my best on every assignment assigned.

Professor McCormick (note the zero scores in the grades below): Here are the grades I have for you. This syllabus gives the details about the grade structure. Check the math and check your returned assignments to make sure it's all right. If there's a clerical error, I'll fix it right away:

Question Sets: 0, 82, 0, 75, 0, 95 (6% each)
First paper: 78
Midterm: 90.5
Second paper: 85
Final: 85
Outside projects: 7/8
Google Group: 0/8
Attendance and participation: 0/8

So between the skipped question sets, the Group discussion and attendance, you gave up 34% of the grade. Even if you were making an A on everything else, that would put it down to a D.
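The arithmetic in that reply is easy to verify (a quick sketch; the only inputs are the category weights quoted in the email itself):

```python
# Weights quoted in the email above: question sets are worth 6% each,
# the Google Group and attendance/participation are worth 8% each.
skipped_question_sets = 3 * 6   # three question sets scored 0
google_group = 8                # 0/8 on the Google Group
attendance = 8                  # 0/8 on attendance and participation

forfeited = skipped_question_sets + google_group + attendance
print(forfeited)        # 34 -- the "34% of the grade" given up

# Even perfect scores on everything else cap the course grade:
print(100 - forfeited)  # 66 -- a D
```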

Student 2: I'm emailing you in regards to my final grades. I was hoping you could provide a detailed summary of my grades for the semester so that I can understand how I received a D. I felt as though I did fairly well, particularly improving on the more major assignments, so I would just like to know how I still failed to pass. If you could email me a detailed summary of my grades, I would greatly appreciate it. Thank you.

Professor McCormick: Yeah, I was disappointed in your grades too. It seemed to me that you are capable of doing much better work, and being more responsible about turning stuff in. Here are the grades I have. Check the math with the grade structure on the syllabus and let me know if there is a clerical error asap: 

Question sets: 76, 0, 82, 0, 85, 56, 80 (6% each)
Evil paper 65
Midterm 68.5
Second paper: 75
Final exam: 85
Outside Projects: 6/8
Google Groups: 4/8
Attendance and participation: 6/8

So the skipped question sets took 12% off of your grade. You got a D on the first paper and didn't take the opportunity to rewrite it that I gave the class. You could have brought that up substantially. The Google Group points would have helped too since your overall score came out at 68%.

When we commit confirmation bias, we cherry-pick the evidence that suits us. The student actively remembers the good grades but forgets the missed assignments and low scores.  Someone picks out the bad Asian driver, or the woman who does poorly at science, and then uses that to fortify their mistake. 

One more example:  over 50% of people think they’ve had prescient dreams or premonitions.  So suppose that you have 20 dreams a night, 365 days a year, for 10 years.  That’s 73,000 dreams.  Which ones are notable and remembered?  The ones that seemed to have something to do with what happened the next day.  The dream you had that seemed to anticipate your mother’s car wreck leaps out in your memory as an extraordinary coincidence.  In China, there’s a saying, “No coincidence, no story.”  But more importantly, there are 72,999 dreams that weren’t special or notable. 
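The dream arithmetic, and the base-rate point behind it, fits in a few lines (a sketch; the 1-in-10,000 match rate is my own illustrative assumption, not a figure from the post):

```python
# The post's supposition: 20 dreams a night for ten years.
total_dreams = 20 * 365 * 10
print(total_dreams)    # 73000

# Base-rate point: even a tiny chance that a dream loosely "matches"
# the next day's events yields several memorable "premonitions" by
# luck alone.  The 1-in-10,000 rate here is purely illustrative.
expected_hits = total_dreams / 10_000
print(expected_hits)   # 7.3
```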

Clearly, having an accurate and objective grasp of the relevant evidence would serve us well. We don’t want to ignore evidence indicating something negative, disastrous, or dangerous because it doesn’t suit what we want to be true. Imagine if a doctor acquired a skewed view of the evidence concerning a potentially fatal disease this way. Suppose the Secretary of State ignored significant negative indicators in the behavior of an aggressive and hostile foreign country. Suppose a potential employer asked you how you did in your Critical Thinking course in college, and then she checked your transcripts against your distorted memory. Suppose you spend thousands of dollars over the years on losing lottery tickets because the occasional wins stick out in your mind so prominently, while the losses are forgotten.  Suppose you spend time praying to God frequently, hundreds or thousands of times in your life, and on the rare occasion when something vaguely resembling what you prayed for came true, you count that as an answered prayer, while ignoring the thousands of misses.  


Matt McCormick
Professor
Department of Philosophy
Sacramento State

(Note: A version of this piece was published on Matt's own blog Atheism: Proving the Negative on 6.4.13)