Sunday, December 14, 2014

The Internet Commons

The current ruckus over Internet neutrality has been framed by the mass media in a highly biased way (as might be expected). It is described as a pitched battle between Internet users in general, who want free, unrestricted access to the Internet, and some major Internet Service Providers (ISPs), who want a proprietary right to give faster access to some of their high-volume commercial users. In short, it is framed simply as a conflict of self-interest between two disparate groups.

This seriously distorts the issue by ignoring the fact that the Internet is a commons. Although the idea of the commons can be traced back to Roman times, it currently gets relatively little attention in the media, in politics and even in contemporary works of political theory and philosophy. As a result, the general public has at best a rather cloudy understanding of the concept. (I include myself in that category.) The lone exceptions might be found in some works on environmental preservation and related utopian thinking about alternative futures. This is regrettable. The commons are actually very valuable attributes of most contemporary societies but in predominantly capitalist countries like ours, they are under constant threat of enclosure by mega-corporations that have undue influence over lawmakers. Internet neutrality is just one such case among many. For a good account of the scope of the threat, see David Bollier’s Silent Theft.

Rather than attempt a careful definition of the commons (I don’t have one), I will focus on two of its central features worth noting. A commons is a highly valued public asset or pattern of behavior that (a) is regarded as of such high value that the public feels special effort ought to be made to ensure that it can be experienced or practiced by future generations in perpetuity, and (b) cannot be privately owned or controlled without risking deleterious consequences to its value as a common public asset. The commons are, or should be, off-market. Some commons are publicly owned (parks, rivers, lakes, forests); other commons are unowned, but their use can be regulated in ways that preserve their value to the public (the human genome, Antarctica, the free electoral process in democracies, basic scientific research).

The Internet is clearly a commons of inestimable value. Thanks to the technology, it is probably the first truly international commons. It has made information on almost every conceivable subject readily accessible to a significant and rapidly growing portion of the world’s population, along with the opportunity to engage in an open-ended electronic conversation about the reliability and significance of that information. The desire of some ISPs to fast-track – at their discretion – some of the more lucrative Internet sites would clearly diminish the value of the Internet commons to all other users. And if such a special permission were granted, it would very likely not be the last.

Moreover, fast-tracking would be profoundly unfair. The ISPs did not create the Internet. It was the work of a research arm of the Defense Department along with the collaboration of some American research universities, mostly taxpayer-financed public entities. Nor did the ISPs build or pay for the long-haul fiber and switching infrastructure that is the backbone hardware of the Internet. The ISPs just dropped in at the end of the line to make the connections between their networks and individual homes and offices. Fast-tracking would be an outrageous enclosing of an invaluable commons.

One possible objection to this argument for Internet neutrality is that it turns a blind eye to the vast amount of Internet traffic that is far from anything one could reasonably consider a common public good, e.g. the streaming of Hollywood movies, pornography, and the widespread use of social media for trivial ends. The point can be granted, but it does not detract from the fact that a substantial part of the Internet serves the commons. We can all live with the fact that another part of the Internet serves relatively trivial personal ends.

Clifford Anderson
Professor Emeritus
Department of Philosophy
Sacramento State

Sunday, December 7, 2014

Physical time travel

There is good evidence that human time travel has occurred. To explain, let’s first define the term. We mean physical time travel, not travel by wishing or dreaming or sitting still and letting time march on. In any case of physical time travel the traveler’s journey as judged by a correct clock attached to the traveler takes a different amount of time than the journey does as judged by a correct clock of someone who does not take the journey.

The physical possibility of human travel to the future is well accepted, but travel to the past is more controversial, and time travel that changes either the future or the past is generally considered to be impossible.

Our understanding of time travel comes mostly from the implications of Einstein’s general theory of relativity. This theory has never failed any of its many experimental tests, so we trust its implications for human time travel.

Einstein’s theory permits two kinds of future time travel—either by moving at high speed or by taking advantage of the presence of an intense gravitational field. Actually any motion produces time travel (relative to the clocks of those who do not travel), but if you move at extremely high speed, the time travel is more noticeable; you can travel into the future to the year 2300 on Earth (as measured by clocks fixed to the Earth) while your personal clock measures that merely, let’s say, ten years have elapsed. You can participate in that future, not just view it; you can meet your twin sister’s descendants. But you cannot get back to the twenty-first century on Earth by reversing your velocity. If you get back, it will be via some other way.

It's not that you suddenly jump into the Earth's future of the year 2300. Instead you have continually been traveling forward in both your personal time and the Earth’s external time, and you could have been continuously observed from Earth’s telescopes during your voyage.
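The arithmetic behind this scenario can be made concrete. The sketch below is my own illustration, not part of the original post: it uses special relativity's time-dilation formula, ignores acceleration and deceleration phases, and assumes a trip of roughly 285 Earth years (to the year 2300) during which only ten years elapse on the traveler's clock.

```python
import math

def speed_for_dilation(earth_years: float, personal_years: float) -> float:
    """Return the constant cruise speed (as a fraction of c) at which
    `earth_years` pass on Earth while only `personal_years` pass for the
    traveler. Follows tau = t / gamma, so the required Lorentz factor is
    gamma = t / tau, and v/c = sqrt(1 - 1/gamma**2)."""
    gamma = earth_years / personal_years  # required Lorentz factor
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Roughly 285 Earth years passing while the traveler ages only ten:
beta = speed_for_dilation(285.0, 10.0)
print(f"required speed: {beta:.6f} c")  # about 0.99938 of the speed of light
```

The point the post makes survives the arithmetic: nothing discontinuous happens; the traveler simply accumulates far less proper time than observers on Earth.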

How about travel to the past, the more interesting kind of time travel? This is not allowed by either Newton's physics or Einstein's special relativity, but is allowed by general relativity. In 1949, Kurt Gödel surprised Albert Einstein by discovering that in some unusual worlds that obey the equations of general relativity—but not in our world—you can continually travel forward in your personal time but eventually arrive into your own past.

Unfortunately, even if you can travel to the past in the actual world you cannot do anything that has not already been done, or else there would be a contradiction. In fact, if you do go back, you would already have been back there. For this reason, if you go back in time and try to kill your childhood self, you will fail no matter how hard you try.

While attempting this assassination, you will be in two different bodies at the same time.

Here are some philosophical arguments against past-directed time travel. I suggest that none of these are convincing. The last one is subtle.
1. If past time travel were possible, then you could be in two different bodies at the same time, which is ridiculous. 
2. If you were presently to go back in time, then your present events would cause past events, which violates our concept of causality. 
3. Time travel is impossible because, if it were possible, we should have seen many time travelers by now, but nobody has encountered any time travelers. 
4. If there were time travel, then when time travelers go back and attempt to change history they must always botch their attempts to change anything, and it will appear to anyone watching them at the time as if Nature is conspiring against them. Since observers have never witnessed this apparent conspiracy of Nature, there is no time travel. 
5. Travel to the past is impossible because it allows the gaining of information for free. Here is a possible scenario. Buy a copy of Darwin's book The Origin of Species, which was published in 1859. In the 21st century, enter a time machine with it, go back to 1855 and give the book to Darwin himself. He could have used your copy in order to write his manuscript which he sent off to the publisher. If so, who first came up with the knowledge about evolution? Neither you nor Darwin. Because this scenario contradicts what we know about where knowledge comes from, past-directed time travel isn't really possible. 
6. The philosopher John Earman describes a rocket ship that carries a time machine capable of firing a probe (perhaps a smaller rocket) into its recent past. The ship is programmed to fire the probe at a certain time unless a safety switch is on at that time. Suppose the safety switch is programmed to be turned on if and only if the “return” or “impending arrival” of the probe is detected by a sensing device on the ship. Does the probe get launched? It seems to be launched if and only if it is not launched. However, the argument of Earman’s Paradox depends on the assumptions that the rocket ship does work as intended—that people are able to build the computer program, the probe, the safety switch, and an effective sensing device. Earman himself says all these premises are acceptable and so the only weak point in the reasoning to the paradoxical conclusion is the assumption that travel to the past is physically possible.
I recommend an alternative solution to Earman’s Paradox. Nature conspires to prevent the design of the rocket ship just as it conspires to prevent anyone from building a gun that shoots if and only if it does not shoot. I cannot say what part of the gun is the obstacle, and I cannot say what part of Earman’s rocket ship is the obstacle.

Brad Dowden
Department of Philosophy
Sacramento State



Sunday, November 30, 2014

How I learned to stop worrying and love moral relativism

This is partly autobiographical. I used to spend a lot of time thinking about metaethical issues and arguing for a kind of moral realism that would be incompatible with moral relativism, but I no longer worry so much about those kinds of issues. You probably don’t care about Swan intellectual autobiography, but I’ll outline an argument that gave me a push.

I’ll start with a definition. Many associate moral relativism with the view that no moral directives are justified or true. They then worry that moral relativists must think that any moral directive is as good or plausible as any other and so anything goes. But this is mistaken. Moral relativists think moral directives can be justified, but the justification of moral directives is relative to the beliefs, values or commitments of a group of people. This means that moral directives aren’t objective in the sense of applying regardless of people’s beliefs, values or commitments. Their justification, and the legitimacy of holding people to them, depends on their beliefs, values and commitments.

Here’s the argument:
1. M is a distinctively moral directive only if it provides reasons for acting.
2. R is a genuine reason for action for an agent only if it is capable of motivating that agent.
3. Therefore, M is a moral directive only if it is capable of motivating an agent.
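For readers who want to check the form of the argument, here is a small truth-table verification (my own illustration, not Swan's) that the inference from premises 1 and 2 to conclusion 3 is an instance of the valid form of hypothetical syllogism:

```python
from itertools import product

# D: "M is a distinctively moral directive"
# P: "M provides reasons for acting"
# Q: "M is capable of motivating the agent"
# Premise 1: D -> P.  Premise 2: P -> Q.  Conclusion: D -> Q.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

valid = all(
    implies(d, q)
    for d, p, q in product([False, True], repeat=3)
    if implies(d, p) and implies(p, q)
)
print(valid)  # True: every row satisfying both premises satisfies the conclusion
```

So the action in the argument is all in the premises, not the inference; that is where the discussion below focuses.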
Premise 1 is a statement of internalism about morality and reasons. It is a conceptual claim about the semantics of moral directives: claims of morality are essentially normative in the sense that there’s a necessary connection between them and practical reason. Moral claims employ terms that are evaluative, action-guiding and prescriptive. Of course, many non-moral claims, like judgments of etiquette, aesthetics, and prudence, have this semantic feature, as well. Some philosophers have exploited this fact in an attempt to undermine morality/reasons internalism. But morality is supposed to be distinct from etiquette, aesthetics, and prudence in the categoricity or authoritativeness of the claim. Something that’s a genuine M has ‘practical clout’ or ‘oomph’, such that someone who said ‘I know it’s M, but I don’t really care about M’ is making some kind of mistake. A moral directive adverts to reasons or considerations that cannot be legitimately shrugged off in this facile way.

This view about the connection between the requirements of practical reason and the requirements of morality is a species of moral rationalism. Moral rationalists usually say that the requirements of morality are practically decisive, but I only say that claims of morality purport to provide reasons for action that have greater deliberative significance or oomph than other practical directives. This version of the thesis avoids, on the one hand, the suggestion that morality has absolute weight in practical deliberations, and, on the other hand, the implication that morality is merely a system of hypothetical imperatives.

Premise 2 is a statement of existence internalism about the connection between reasons and motives. It is very often associated with Neo-Humean theories of motivation. Some consideration provides an agent with a genuine normative reason for action only if it is capable of playing a motivational role in her deliberations. This is just what it means for some consideration to be a reason for the agent. This statement of internalism, then, is a thesis about what has to be true in order for a reason statement truly to apply to an agent. It must connect up with things the agent cares about, or which are deliberatively accessible. This means that externalists about reasons and motivation are wrong to think that an agent has a reason for action when the proffered considerations are deliberatively inaccessible. An agent cannot sensibly be said to possess a reason that is deliberatively inaccessible to her, and so it cannot be a reason for her. The externalist could say, “Tough. We’ll still apply the reason statement to her,” even if the proffered considerations are deliberatively inaccessible from her point of view. But that makes it sound less like the kind of thing that we should see as an authoritative directive and more like the kind of thing that would be an authoritarian directive.

The conclusion in 3 is a thesis about how to identify the contours of genuine moral directives. Understood this way, premise 2 is presenting answers to the questions raised by premise 1. Moral directives essentially claim that some agent has a significantly weighty reason to act (or avoid acting) in some way. They purport to direct others authoritatively. What, if anything, could possibly justify these claims? Premise 2 suggests the response that moral claims directed to an agent are appropriately justified in terms of considerations that are reasons for her. It follows, then, that moral directives have authoritative normativity when (and because) they are grounded in her rational and evaluative commitments; the authority that grounds these claims is her own.

In other words, the connection between normative reasons and motivation (in 2), and so the connection between morality and motivation (in 3), is read in such a way that an appropriate connection to motivation is what makes it the case that R is a genuine reason for action and M is a genuine moral directive. This is moral relativism. The most obvious objection is that moral relativism just gets the contours of morality badly wrong. Doing the moral thing cannot be so easy – like shooting an arrow, drawing a circle around where it lands and calling it a ‘bulls-eye.'

As I said, I’m not so worried anymore about this implication, but I can say more in the comments. What’s attractive to me about relativism is that it secures the distinctive authoritative status of moral judgments without authoritarian bossing around.

Kyle Swan
Department of Philosophy
Sacramento State

Sunday, November 23, 2014

Suicide and love: Do actions speak louder than words?

The spark for this post came from an offhand sentence in a recent student paper—and the paper wasn’t even on suicide:
“To Brittany Maynard, the cost of treatment afflicting her with a bald head, 1st degree burns, morphine-resistant pain and suffer from cognitive abilities outweighed the benefit of spending the last six months with the people she loved in the condition she is currently in."
If you do not yet know about Brittany Maynard’s publicly pre-announced suicide—her now-famous editorial “My Right to Die with Dignity at 29” came out only last month—she moved from California to Oregon for one reason and one reason only: to legally obtain a prescription of lethal drugs from an Oregon doctor.

When I Googled Brittany’s name today (November 21), the first listed link was a piece on Cosmopolitan.com (!) promoting a posthumous Compassion & Choices video released yesterday (November 20) which uses Brittany’s tragic circumstances to again promote its own political goals.

There are many things worth discussing about Brittany’s tragic situation and decision, the political goals of C & C, and the relations between them. But in this short post I’d like to make just one small point that relates to something unsettling I’ve noticed about what Brittany constantly stressed in her editorial and her videos: the love between her and family and friends.

I’m reluctant to even attempt making this point. But here goes:
My decision to take my own life for prudential reasons—reasons referring to the anticipated benefits and burdens of continuing to live—necessarily makes a certain kind of statement—not merely about the value of my life to me, but the value of others’ lives to me.
(This point is different than the oft-heard claim that my decision to take my life because of a condition I have—like a brain disease—necessarily expresses a certain kind of statement about the value of other people’s lives with that condition.)

One can make this point strongly, and in ways that sound harsh. But one can make it in softer ways as well.

Here’s a strong statement of the point by G. K. Chesterton over a century ago (1902):
“Grave moderns told us that we must not even say “poor fellow,” of a man who had blown his brains out, since he was an enviable person, and had only blown them out because of their exceptional excellence. Mr. William Archer even suggested that in the golden age there would be penny-in-the-slot machines, by which a man could kill himself for a penny. In all this I found myself utterly hostile to many who called themselves liberal and humane. Not only is suicide a sin, it is the sin. It is the ultimate and absolute evil, the refusal to take an interest in existence; the refusal to take the oath of loyalty to life. The man who kills a man, kills a man. The man who kills himself, kills all men; as far as he is concerned he wipes out the world. His act is worse (symbolically considered) than any rape or dynamite outrage. For it destroys all buildings: it insults all women. The thief is satisfied with diamonds; but the suicide is not: that is his crime. He cannot be bribed, even by the blazing stones of the Celestial City. The thief compliments the things he steals, if not the owner of them. But the suicide insults everything on earth by not stealing it. He defiles every flower by refusing to live for its sake. There is not a tiny creature in the cosmos at whom his death is not a sneer. When a man hangs himself on a tree, the leaves might fall off in anger and the birds fly away in fury: for each has received a personal affront. Of course there may be pathetic emotional excuses for the act. There often are for rape, and there almost always are for dynamite. But if it comes to clear ideas and the intelligent meaning of things, then there is much more rational and philosophic truth in the burial at the cross-roads and the stake driven through the body, than in Mr. Archer's suicidal automatic machines…”
Here’s a softer, narrower, more cautious statement of the point (from me):
Whether or not you think it morally wrong or a sin, my choice to end my life for prudential reasons is not only a commentary on the value I place on my life, but is also a commentary on the value I place on the potential and actual contents of my life—including the people I love.
Of course, not all suicides are equal in this way, or even made for prudential reasons in the first place: Socrates is not Saul, Romeo is not Robin Williams, and Juliet is not Judas.

But so-called ‘rational’ suicides—in particular, those in which a mentally competent adult decides to take her own life because she fears the anticipated blessings of continuing to live will be outweighed by the anticipated burdens—cannot but send a jarring message to loved ones.

The message? “My life is no longer worth living.” Translated? “You are no longer worth me living for.”

When I choose to end my life on purpose for prudential reasons, even though my words to those surrounding me may be “I love you,” my actions are, at the same time, saying, “I would rather die than spend more time with you.”

I think this interpretation of my act is correct even factoring in my fear of pain and losing control.

Brittany Maynard and those like her are sometimes treated primarily as victims—of disease, or C & C, or both. It’s human nature to pity victims and to try to comfort them, reassure them, give them what they want, and avoid causing them to feel guilt or shame.

But perhaps treating them this way risks morally infantilizing them.

Why not treat them like moral adults, and show them how their actions speak like words?

Sunday, November 16, 2014

Getting along with moral disagreement


Thanksgiving is around the corner for Americans. It’s a time for food, family, and thanks, but also arguments about politics. This used to be tolerable to some, but now the country feels so polarized that many find it too exhausting to engage in discussion with those who have different views.

This is just one way in which the unhealthy divide in this country manifests itself. Even if there is a sense in which we have always been so divided, it’s toxic. And one side is not more to blame than the other. Liberals think conservatives are certifiably insane and vice versa. Neither camp seems willing to locate or concede any common ground. The government shutdown we dealt with this time last year is one consequence, thanks to our pathetic “do nothing” Congress. Obama and McConnell have recently vowed to get along better, but we can only hope these aren’t empty promises from Washington.

Regardless of how the new Congress behaves, we should do better. Often we cast our moral and political opponents as evil and unreasonable, rarely making a serious effort to understand why anyone would think differently. As psychologist Jonathan Haidt says, most everyone is motivated to do what they think is right. Liberals, for example, err when they “understand conservatives as motivated only by greed and racism.”


Haidt’s own research suggests that both ends of the political spectrum tap into moral ideas that fit with human nature, resting on evolved moral intuitions generated by our “righteous minds.” Perhaps surprisingly for an academic, he suggests that, if anyone, it’s liberals who discount certain values, such as in-group loyalty, purity, and authority, which a flourishing society cannot completely abandon. On Haidt’s view, liberals care mainly about harm and fairness, so they can’t fathom why anyone would place great moral weight on, say, loyalty to one’s own country.



Haidt and others think this is the key to getting along better. However, even if Haidt is right about how liberals and conservatives think about morality, understanding moral disagreements as grounded in fundamentally different moral values would presumably make disagreements more entrenched. As Jesse Prinz points out, this would make political debate across parties “a bit of a charade,” as one’s opponents must be viewed as either morally bankrupt or ignorant. But we shouldn’t underestimate the common moral ground we have and how often the sticking points are non-moral facts.

Consider abortion, still one of the most contentious issues in America. Liberals and conservatives tend to agree that we shouldn’t sacrifice a person’s life for mere convenience. Much of the disagreement is about when a fetus becomes a full-blown person, with basic inalienable rights. Conservatives tend to believe that life begins at conception. Many liberals instead say that, while a human organism may begin at conception, a person does not exist until at least the organism can suffer. If the abortion debate hinges greatly on when personhood begins, then this is not primarily a dispute about fundamental moral values.

Other moral debates likewise seem to boil down to non-moral facts. Liberals and conservatives can both value social stability, for example, but disagree about whether same-sex marriage is likely to erode it. The great opposition to human cloning is often due to misunderstanding the science (e.g. cloned individuals are not mindless servants or even exact copies of people, but rather very much like twins). Often people just have different beliefs about the likelihood of various threats (e.g. climate change, mass shootings, government take-over, terrorist attacks) and about the best ways to avoid harm (e.g. whether capital punishment deters crime). Other disputes stem from different religious beliefs about the nature and origin of the universe (consider the debate over creationism in schools) or about whether a practice (e.g. divorce, contraception, homosexuality, premarital sex) is unacceptable to one’s deity.

Conservatives and liberals might seem to exhibit fundamentally different values concerning social programs. But often these disputes turn on disagreements about the severity of lingering racism or innate differences between the sexes. These beliefs affect, for example, the heated debates about the killings of unarmed black men (in Ferguson and elsewhere), welfare, and equal pay for equal work. Conservatives care about fairness and people’s wellbeing, but the opposition to special treatment for certain groups often seems driven by the non-moral belief that racism is no longer a substantial problem in America. (This is precisely what led the Supreme Court last year to overturn a core part of the Voting Rights Act.)

The point is this. If moral disputes often turn on disagreements about non-moral facts, then we are in a better position to get along. First, with common moral ground, we can focus on the arguably more tractable issues. Second, even for those numerous disagreements that will undoubtedly remain, a proper perspective on one’s opponents can aid in compromise. No longer believing the opposing side is evil or insane should make one much more willing to debate the issue in a civilized manner and concede certain points, even without agreement on all the non-moral facts of the case. Compromise, not agreement, is often the more attainable goal, especially given our stubborn nature. (Congress, take note.)

Perhaps the primary prescription is humility. Whether we should increase taxes on the rich or whether disadvantaged groups are entitled to social services, it all depends a great deal on complicated historical, sociological, and economic facts that are difficult to settle. Deferring to experts is often required, then, even if that means the moral verdict may be influenced by our best science, for example.

So this Thanksgiving remember that, just like you, your fellow citizens on the other side of the issue also want to do what’s right. And by and large they value life, liberty, fairness, happiness, family, kindness, respect, honesty, property rights, desert, justice, just like you. More than half the battle is figuring out how these values apply to the complex issues of our day. We should expect in advance that reasonable people may disagree.


Josh May
Department of Philosophy
University of Alabama at Birmingham

Sunday, November 9, 2014

Measurement: Do we take it or make it?

In May we took a perspective on measurement theory in the philosophy of science. The current snapshot of measurement theory is that measurement is representational.

The key aspects of representation through measurement are:
  1. Measurement tells us what things look like (from a specified vantage point), rather than what they are like.
  2. Measurement involves selective perspectival input.
Van Fraassen uses the analogy of visual perspective to illustrate (1) and (2). Think of measurement as taking a vantage point on some phenomenon. For example, when we measure evolutionary processes, the scientist decides the perspective (e.g. the view from the gene, epigene, individual, population, niche, etc.).

This view, which I will refer to as the ‘perspectival view,' is informative for analyzing scientific practice. It accounts for the imperfections and limitations of our representational activities while grounding a certain kind of objectivity. The phenomena are presumed to remain stable, even if our theories and practices may vary in successfully representing those phenomena. This is the basis for the appearance-reality distinction discussed in ‘Explanation and Illusion’. However, the perspectival view of measurement places too much focus on the outcome of passive representation, and not enough on the process of measurement.

Van Fraassen is wrong to use perspectival art as an analogy to measurement. Taking a perspective is a passive activity. Much of measurement is not passive. It is messy in terms of the type of interaction that takes place. And, a theory of measurement should account for this interactivity.

Here’s a simple example to bring our attention to measurement interactivity:

How do you measure the boiling point of water? You stick a thermometer in the sample and get a reading. This seems simple and representational: the value on the thermometer represents the quantity of temperature. But this perspectival story leaves out the interaction involved in the measurement process.

According to Hasok Chang (2004), the history of standardizing fixed points in thermometry is a history of “manufacturing” fixed points. The point at which water boils depends on the material conditions within our measurement set-ups. Initially it was discovered that boiling point varies with differences in atmospheric pressure (2004, 15). Additionally, the presence of dissolved air in water produced ebullition-like phenomena at 101.9 degrees C (2004, 19). However, purged water (water without dissolved air) was measured to behave in a phenomenologically similar manner at much higher temperatures (as high as 140 degrees C). According to Chang, scientists began to focus on samples of water without dissolved air (2004, 16-19). Chang presents an anecdote about how De Luc walked, slept, ate, etc., for four weeks straight, all while shaking a tube of water to purge it of the dissolved air. De Luc’s dedication to manipulating the conditions of measurement serves as a good illustration of the care with which the measurement interactions have to be chosen in order to have a stable, reproducible phenomenon. It also illustrates how sensitive the phenomenon is to the interaction between conditions of the measurement set-up.
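Chang's point that the "fixed" point moves with atmospheric pressure can be illustrated numerically. The sketch below is my own, not Chang's: it uses the standard Clausius-Clapeyron approximation with a constant heat of vaporization, which is rough but adequate for seeing how far the boiling point drifts with the conditions of the set-up.

```python
import math

R = 8.314          # gas constant, J/(mol K)
L_VAP = 40_660.0   # enthalpy of vaporization of water, J/mol (approximate)
T1 = 373.15        # boiling point at 1 atm, K
P1 = 101_325.0     # standard atmospheric pressure, Pa

def boiling_point(pressure_pa: float) -> float:
    """Estimated boiling temperature (K) at the given ambient pressure,
    via Clausius-Clapeyron: 1/T2 = 1/T1 - (R/L) * ln(P2/P1)."""
    inv_t2 = 1.0 / T1 - (R / L_VAP) * math.log(pressure_pa / P1)
    return 1.0 / inv_t2

# At the summit of Everest (~34 kPa) the estimate is near 71 degrees C:
print(boiling_point(34_000.0) - 273.15)
```

Change the conditions, change the phenomenon: the same thermometer in the same water gives a different "fixed point" at a different pressure.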

Now, for the difficult philosophical question: Is this type of measurement representational, or, is it productive? In each of the measurement set-ups, the boiling point is taking shape with the measurement conditions. In other words, the set-up provides the conditions for the production (and re-production) of the phenomenon. This type of language doesn’t have to “sound” quantum mechanic-y or constructivist. We do not have to discuss a pot of water boiling in the forest. In fact, we need not say anything about pre-measurement values and post-measurement results. All we have to focus on is that the interaction of the conditions for measurement matters to the production of the phenomenon. In simple terms, change the conditions, change the phenomenon. Whether you choose to remain a representationalist or a productivist, one thing we have to consider is that much of the measurement process occurs within the measurement set-up and execution. The final representational step, the measurement outcome, is a small slice of the process. A robust theory of measurement should account for the interactions in this process.

While we’re doing some revision, let’s try out a more adequate art analogy for measurement—one that focuses on interaction rather than passive perspective. The art process of Jackson Pollock is a good starting point.

Pollock numbered his paintings so that people would look at them without searching for representational elements in their titles (Karmel and Varnedoe 1999). For Pollock, the work of art is not a representation of a phenomenon (1999, 68-69). Rather, it is the phenomenon, which is produced by the interactions that take place in the painting set-up (1999, 99). Pollock was resistant to representation in art. He was also resistant to the view that artists should paint things “out in nature.” When asked whether he painted from nature, Pollock replied, “I am nature” (1999, 253). Pollock’s painting set-up and the interactions that occurred within it can be summarized as follows: First, paint was carefully selected to have the proper viscosity. Pollock used gloss enamel paint rather than oil-based paint. The paint was sometimes diluted to have little textural effect, and at other times thickened. He used sticks, worn-out brushes, and basting devices that looked like giant fountain pens.

Pollock also used raw, unstretched canvas in order to be able to perform full-body painting (1999, 72). The painting resulted from the interaction that took place within the painting set-up. Moreover, it is fair to say that it is difficult to appreciate the work of art without looking at this process of interaction. As part of the painting set-up, Pollock is interacting with other elements of the set-up to produce the phenomenon. In interviews Pollock describes being “in” his painting when making it (1999, 17). This analogy puts emphasis on the messy interaction that occurs in measurement. It is, however, important to note that the analogy is incomplete. In measurement, we want repeatable, reproducible phenomena. In painting, we want unique, authentic productions. I welcome all of our fellow DR-ers to manufacture a better analogy.


Works Cited:

Chang, Hasok (2004). Inventing Temperature. Oxford: Oxford University Press.

Karmel, Pepe, & Varnedoe, Kirk (1999). Jackson Pollock: Interviews, Articles, and Reviews. New York: Museum of Modern Art; distributed by H.N. Abrams.

van Fraassen, Bas C. (2009). Scientific Representation: Paradoxes of Perspective. Oxford: Oxford University Press.

Vadim Keyser
Department of Philosophy
Sacramento State

Monday, November 3, 2014

Deadly Cows and Living Inside of Whales

Humans aren’t good at estimating probabilities. My sense of how likely something is usually arises from an intuitive hunch that comes from the comfort level of the idea. If it feels cognitively uncomfortable, like if someone says, “Eggplants usually grow to weigh more than 20 lbs.,” I’m skeptical. But if it feels familiar or comfortable in my head, like, “Justin Bieber actually can’t play any musical instruments,” I’m inclined to accept it.

Behavioral economists like Daniel Kahneman and Amos Tversky have identified a bias that lives here. It’s called Availability Bias. Formally, the idea is that we are prone to mistake the ease with which something is called to mind, or the subjective comfort we have with an idea, for its objective probability. In one famous experiment, subjects were asked to estimate whether there are more English words that begin with “r” or that have “r” as the third letter. Since calling words to mind that start with “r” is easy, most people answer that those are more common. But being able to readily recall words that start with a given letter is a quirky feature of the human brain. We’re good at that, just as we’re pretty good at memorizing 7-digit strings of numbers like phone numbers, while 13-digit strings are much harder. Our brains are not well-equipped to do a systematic search of words with “r” in the third position, and we mistake that subjective difficulty for real infrequency in the world.

The mistake comes out in lots of places. During Shark Week on the Discovery Channel, people are more afraid of shark attacks and estimate their likelihood as higher. Cows, it turns out, are the real menace to society. You are more than 20 times more likely to be killed by cows than by a shark. People think that zombies and vampires are more plausible now than they did several decades ago when they weren’t such a large part of pop culture. We all drive more conscientiously for the weeks after someone we know has been in a serious car wreck, and we watch our salt intake more closely when a relative has recently had a heart attack.



Now I’m going to suggest something that may offend. Why is it that so many people who are otherwise quite reasonable, and who would never believe similar claims out of context, claim to actually believe outlandish stories like Jonah and the Whale, the Genesis creation story, the claim that Noah lived to be 900 years old, the story of Joseph Smith being visited by an angel who gave him the Book of Mormon, the story of Mohammed being visited by an angel who gave him the Koran, and Paul having a seizure and hearing the voice of God? If comparable stories were offered with different, unfamiliar details, I think most Americans would be skeptical at the very least. In fact, if you’re the typical mainstream American Christian, you are skeptical about the Mohammed story and the Joseph Smith story. But tens of millions, perhaps hundreds of millions, of Americans take these claims to be true. I take it as pretty obvious that they are not. And I think it’s availability bias that has artificially elevated their plausibility in people’s minds. How did these stories become so available to so many people? When people hear such stories over and over, hundreds or thousands of times through childhood and into adulthood, and when the stories are treated as momentous, or even historical, it has the same effect on our probability judgments as Shark Week. When we see portrayals, reread the stories, and hear them repeated and treated with reverence thousands or tens of thousands of times, they come to feel familiar, vivid, and, ultimately, probable. The readiness with which the idea can be called to mind gets mistaken for its reality. I don’t think it is too much to suggest that our widespread acceptance of the story of Jesus’ return from the dead can be attributed, at least in part, to availability bias too.

I’m not offering an argument here for why I think these claims are false. I think you’ll already agree with me in the abstract that it is exceedingly unlikely that a person could survive for several days after being eaten by a whale. And humans do not, as far as we know, ever live much past 100 years, even with all of the benefits of modern medicine and health research. And, the religious examples aside, you’ve never heard a single plausible report of a human actually being biologically dead and then returning to normal living function after three days. You already share my skepticism about other claims like these. In other contexts, you already agree that such claims are outrageous. So why is it that the religious examples don’t feel more cognitively uncomfortable? Availability bias.

Here’s a meta-level way to think about your own disposition to make this mistake (and lots of others). We all possess nervous systems that do a pretty good job of solving some problems. But we know that when we put them in certain environments, or frequently expose them to certain kinds of stimuli, like mythological stories, those exposures skew the system’s capacity to sort true from false, probable from improbable, reasonable from unreasonable. And knowing that we are built that way facilitates our efforts to compensate and correct for the bias. 

Matt McCormick
Department of Philosophy
Sacramento State

Sunday, October 26, 2014

Biased science

Most people think of bias in personal terms, but in science the most pernicious forms of bias are institutional, not personal. They are not, in other words, the result of rogue scientists fudging their findings to support their pet theories. Rather, they are the result of biased processes for publishing scientific findings that are, in fact, perfectly legitimate.

The key point to understand is this: For any scrupulously conducted scientific study or experiment, there is always some chance that its findings are wrong. Reporting bias and publication bias are effectively institutional preferences for selecting the results of just such studies and experiments for publication, while thousands of others that find no such results never see the light of day.  Both forms of bias are rampant in science, and their causes are many.

Social sciences suffer from major reporting bias, because most negative results are not reported. Franco et al. (2014) conclude that out of 221 survey-based experiments funded by the National Science Foundation from 2002 to 2012, two-thirds of those with results that did not support a tested hypothesis were not even submitted for publication. Strong results were 60% more likely to be submitted and 40% more likely to be published than null results. Only 20% of those with null results ever appeared in print. (See graphic here.)

It is not much better in clinical studies. Reporting bias leads to over-estimating or under-estimating the effect of a drug intervention, and it reduces the confidence with which evaluators can accept a test result or judge the significance of such results. For any medicines or medical devices regulated by the FDA, posting at least basic results to ClinicalTrials.gov is mandatory within one year of study completion, but compliance is low. A 2013 estimate puts the failure to publish or post basic results at over 50%, and no results at all were reported for 78% of 171 registered but unpublished studies completed before 2009.
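The inflationary effect of selective reporting is easy to see in a toy simulation. (Everything below is an illustrative assumption of mine, not a figure from the studies cited here: the noise level, the number of trials, and the crude "publication" cutoff are all invented.) Simulate many trials of a drug whose true effect is zero, let only impressive-looking results be published, and the published record shows a substantial effect:

```python
import random
import statistics

random.seed(0)

# 1,000 simulated trials of a drug whose TRUE effect is zero.
# Each trial's estimated effect is the true effect plus noise.
true_effect = 0.0
estimates = [random.gauss(true_effect, 1.0) for _ in range(1000)]

# A crude publication filter: only strikingly positive estimates appear.
published = [e for e in estimates if e > 1.0]

apparent_effect = statistics.mean(published)
# The full record averages near zero; the published record does not.
```

Since every published estimate had to clear the cutoff, the published mean is guaranteed to exceed it, no matter how useless the drug is.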

Reporting bias infects the evidence evaluation process for randomized controlled trials (RCTs), the basic experimental design for testing scientific hypotheses. That RCTs have limits is well-known. Each requires a large number of diverse participants to achieve statistical significance. Often the random assignment of participants, or sufficient blinding of subjects and investigators, is not feasible, and many hypotheses cannot be tested due to ethical concerns. For instance, sham or ineffective treatments given to seriously suffering patients harm those who might otherwise benefit. Nor should we run an antisocial-behavior RCT in a simulated prison environment just to get accurate data. Even when RCTs are ethical and well-designed, the critical opinions of experts are crucial, since a risk of bias is always present.

Peer-review assesses the value of RCTs, but the effectiveness of this process is compromised when relevant data are missing. Without effective peer-review, we consumers of science and its applications have no coherent reason to believe what scientists tell us about the value of medical interventions or the danger of environmental hazards.

Not sharing, publishing, or making accessible negative results has numerous bad consequences. Judgments based on incomplete and unreliable evidence harm us. We probably accept many inaccurate scientific conclusions. Ioannidis (2005), for example, contends that reporting bias results in most published research findings being false.

Reporting bias harms participants in studies who are exposed to unnecessary risks. Society fails to benefit when relevant RCTs with negative results are excluded from peer-review evaluations. Researchers waste time and money testing hypotheses that have already been shown to be false or dubious. Retesting drug treatments already observed to be ineffective, or no more effective than a placebo, squanders resources. Our scientific knowledge base lacks defeaters that would otherwise undercut flawed evidence and false beliefs about the value of a drug. RCTs and the peer-review process are designed to detect these but fail due to selective reporting.

RCT designs are based on prior research findings. When publishers, corporate sponsors, and scientists are unaware of previous negative results and prefer positive to negative results, many hypotheses with questionable results worthy of further testing are overlooked. Since not all trials have an equal chance of being reported, datasets skew (erroneously) positive, and this affects which hypotheses scientists choose to examine, accept, or reject.

Mostly positive results in the public record make the effect of a drug with small or false-positive effects appear stronger than it actually is, which in turn misleads stakeholders (patients, physicians, researchers, regulators, sponsors) who must make decisions about resources and treatments on the basis of evidence that is neither the best nor even fully available. Studies of studies (meta-analyses) reveal this phenomenon with popular, widely prescribed antiviral and antidepressant medications. Ben Goldacre tells a disturbing story about the CDC, pharmaceutical companies, and antivirals.


A meta-analysis uses statistical methods to summarize the results of multiple independent studies. RCTs must be statistically powerful enough to reject the null hypothesis, i.e., the one researchers try to disprove before accepting an alternative hypothesis. Combining RCTs into a meta-analysis increases the power of a statistical test and resolves controversies arising from conflicting claims about drug effects. In separate 2008 meta-analyses of antidepressant medications (ADMs), Kirsch et al. and Turner et al. find only marginal benefits over placebo treatments. When unpublished trial data get added back to the dataset, the great benefit previously reported in the literature becomes clinically insignificant. This is disturbing news: For all but the most severely depressed patients, ADMs don’t work, and they may appear to work in the severely depressed because the placebo stops working, which magnifies the apparent effect of the ADM compared to placebo controls.
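A minimal sketch of the pooling step helps show how this happens. The sketch uses fixed-effect inverse-variance weighting, one standard way to combine studies; the effect sizes and variances below are invented for illustration and are not the Kirsch or Turner data. Adding hypothetical unpublished null trials back into the pool shrinks the combined effect:

```python
import math

def fixed_effect_meta(effects, variances):
    """Pool independent study estimates by inverse-variance weighting.
    Returns the combined effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

# Hypothetical published trials: modest positive effects, small variances.
pub_effects, pub_vars = [0.40, 0.35, 0.45], [0.02, 0.03, 0.02]

# Hypothetical unpublished trials recovered from a registry: null results.
unpub_effects, unpub_vars = [0.02, -0.05, 0.00], [0.02, 0.03, 0.02]

pooled_published, _ = fixed_effect_meta(pub_effects, pub_vars)
pooled_all, _ = fixed_effect_meta(pub_effects + unpub_effects,
                                  pub_vars + unpub_vars)
# In this toy dataset the pooled effect roughly halves once the
# unpublished null trials are included.
```

The same mechanism, at scale, is how a clinically insignificant drug can look impressive in the published literature alone.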

Even when individual scientists behave well, the scientific establishment is guilty of misconduct when it fails to make all findings public. In order for science to be the self-correcting, truth-seeking process it claims to be, we need access to all the data.

Scott Merlino
Department of Philosophy
Sacramento State

Sunday, October 19, 2014

The happiness of philosophers

Last week I promised to reveal some data to help answer three questions:
1. Are philosophers happy?
2. Are philosophers happier than non-philosophers?
3. Does practicing philosophy make people happier?
Here is a bit of information on the three studies that I am drawing from.

Study 1: Professionals (philosophers vs others).

From 2009-2013, thousands of people from around the world participated in the International Wellbeing Study, a multilingual online survey led by Aaron Jarden and his colleagues. I used international English-language philosophy email lists to encourage philosophers to take the survey. 96 philosophers took the survey in English. For this study, I compared these 96 philosophers with 96 random English-speaking non-philosophers. The philosophers were a broad mix of graduate students and all levels of professor. Essentially, this study compares very experienced philosophers with roughly equivalent non-philosophers.

Study 2: Upper Level Classes (philosophy majors in a philosophy class vs non-philosophy majors in a history class).

In 2013, I conducted a short paper survey on happiness in two very similar undergraduate summer classes: an upper-level history class and an upper-level philosophy class. There were 29 philosophy majors in the philosophy course and 63 non-philosophy majors in the history course. All of the philosophy majors would have completed 2-6 philosophy courses more than the history students. Matt McDonald input the data and helped with the analysis for this study. Essentially, this study compares somewhat experienced philosophers with roughly equivalent non-philosophers.

Study 3: Introductory Ethics Class (philosophy majors vs others).

Earlier in 2013, I also conducted a very similar short paper survey at the very beginning of a large introductory ethics course. 33 of the responding students declared themselves to be philosophy majors, while the remaining 130 reported not being philosophy majors. It is very unlikely that any of these students would have taken more than 1 or 2 philosophy classes prior to this one, although the philosophy majors are likely to have taken 1 or 2 already. The survey consisted of several questions about happiness chosen from the International Wellbeing Study. Essentially, this study compares inexperienced philosophers with roughly equivalent non-philosophers.

Hopefully it is clear that I have collected data that compare philosophers with non-philosophers at three (very rough) stages of the philosophical life-cycle: novice, apprentice, and professional.

Notes

Please note that all the questions were self-report questions with multi-option response scales. For example: “…please select the point on the scale that you feel is most appropriate in describing you.” “In general, I consider myself:” [scale of 1-6, where 1 is labelled “Not a very happy person” and 6 is labelled “A very happy person”]. For this blog entry, all response scales (and responses) were converted to 0-10 scales to make comparisons easier. Finally, ‘FoF’ on the figure means ‘frequency of feeling’. Analyses are preliminary. Not all differences are statistically significant.
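The conversion to 0-10 scales is presumably a simple linear map; the post does not specify the exact formula, so the version below is an assumption on my part:

```python
def rescale(value, old_min, old_max, new_min=0.0, new_max=10.0):
    """Linearly map a response from its original scale onto 0-10."""
    fraction = (value - old_min) / (old_max - old_min)
    return new_min + fraction * (new_max - new_min)

# On the 1-6 happiness item, the endpoints map to 0 and 10,
# and a response of 4 lands at 6.0 on the 0-10 scale.
low, mid, high = rescale(1, 1, 6), rescale(4, 1, 6), rescale(6, 1, 6)
```

Any linear rescaling like this preserves the ordering and relative spacing of responses, which is all that is needed to compare groups across differently scaled questions.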




1. Are philosophers happy?

Yes. They scored above average on all of the relevant scales. The three groups of philosophers also claimed to be happy 38-44% of the time (on average, when also given corresponding questions about feeling unhappy and neutral). But above-average results like this are true of the vast majority of English-speaking Westerners (see here for a detailed PDF report). So, nothing too surprising or interesting so far.

2. Are philosophers happier than non-philosophers?

No. The bottom half of the figure above shows the differences between philosophers and non-philosophers in each group on several measures of happiness. The figure shows that the philosophers in each of the three studies reported being less happy than the non-philosophers. As your eyes travel up the figure, you’ll also notice that the philosophers in each group reported being less satisfied with life, usually having worse self-esteem, and being less optimistic about their future. In fact, the only question that philosophers scored higher on is their reported belief that happiness is something that we “cannot change very much”. So, philosophers are less happy than non-philosophers, but we don’t yet know if philosophy is the cause of the difference.

3. Does practicing philosophy make people happier?

Because each of the three pairings of philosophers with non-philosophers roughly represents a different life stage of the “homo philosophicus” we can compare the differences at each life stage to see if more experience with philosophy exacerbates the problem. Sure, it would be better to track individual philosophers from cradle to grave, but that study would take a lot of money and time (my whole life or more!). Look back at the figure. In nearly every case, the differences between philosophers and non-philosophers increase as the amount of experience in philosophy increases.

Yikes! Compared to our relevant non-philosophy cohorts, we fall further and further behind in the happiness stakes! Note that novice philosophers are slightly less happy than their non-philosopher counterparts. So, philosophy seems to attract less happy people. But, the longer those novices practice philosophy, the more they fall behind those who do non-philosophical things with their time.

It’s not all bad, though. The ~10% difference in reported satisfaction with life actually decreases as philosophical experience increases. So philosophy might make us less happy, but more satisfied (but only just enough to catch up with farmers, postal workers, and school teachers).

The silver lining

But look again. As philosophical experience increases, the reported importance of happiness decreases. Contra hedonism, we learn that happiness isn’t all that important. Which is lucky, because philosophers also believe less and less (comparatively) that our happiness is the kind of thing we can change. Philosophers are also 10% less optimistic than non-philosophers. Given the ~10% optimism bias most people have (see this meta-analysis), that makes philosophers realistic. Philosophers track truths that are relevant to their lives better than non-philosophers do. Indeed, it may be a tacit understanding of this more accurate epistemic position that affords philosophers the smugness to offset the hit to our life satisfaction caused by being less happy! You might say that philosophers have exchanged some happiness for some truth.

Socrates would approve.

Dan Weijers
Department of Philosophy
Sacramento State

Sunday, October 12, 2014

Does philosophy make us happy?

In Ancient Greece, well-born young men could decide what to do with their lives. Some chose to pursue pleasure via wealth. Others chose to chase power and glory through politics and war. A minority, including the likes of Aristotle, chose to pursue the good life through philosophy. Aristotle joined Plato’s academy. Other young men sought out philosophical training wherever they could find it. Women did not have as much choice. They could mostly be found in the garden of Epicurus.

In the same way that philosophers were the first mathematicians and scientists, they were also the first self-help gurus. Many Ancient Greek philosophers were and are still famous for their explicit application of philosophical insights to human life. Correspondingly, many young men sought out philosophy specifically because they desired guidance on how to live well.

What about these days? Self-help gurus are regarded with considerable suspicion by all but those whom they’ve saved from sweating the small stuff or from following 6 easy steps instead of 5. Fortunately, philosophers are no longer viewed as self-help gurus. Unfortunately, we have the opposite reputation—very honest and earnest, but as uplifting as a deflated balloon. Indeed, the amount of fun enjoyed at dinner parties seems to decrease markedly as soon as philosophers speak. Our general skepticism, liberal but unnecessary use of Latin, and penchant for pointing out the many other potential views on any topic tend to kill the joy. So, in our contemporary era of extreme specialization, philosophers are rarely sought out for their advice on living well.

But philosophers still do discuss the good life. And we are very smart (right?!). So perhaps people should come to us for advice on how to live well. These speculations, and my love of philosophy, have led me to a few questions:
1. Are philosophers happy?
2. Are philosophers happier than non-philosophers?
3. Does practicing philosophy make people happier?
What is happiness exactly? We could call it: a preponderance of positive over negative feelings and a sense of satisfaction with our lives. This is the preferred definition in happiness studies. Along quantitative hedonistic lines, I prefer to call it simply: a preponderance of good feelings over bad. (I reduced the satisfaction with life to good and bad feelings—getting what I want without any felt reward doesn’t seem to make my life better). Regardless, both of these definitions lead to a further question.
4. What is the role of happiness in the good life?
While the first three questions are relative newcomers for me, this last question has shaped much of my research. Answers to 4 abound in ancient and contemporary philosophy. Interested readers can pursue this question further here. Importantly, most theories of the good life afford a prominent position for happiness in their hierarchies of value. Happiness might share first place with truth and friendship, or perhaps with beauty and virtue. Happiness might not be the ultimate good, but it’s certainly worth inviting to the party.

In next week’s Dance of Reason entry, I will report on 3 studies that involved asking philosophers and non-philosophers about their happiness. The studies cover participants from many nations and philosophers at all levels—from first year student to full professor. I’ll use data from the studies to answer questions 1, 2, and 3 above. But, I know that data is not always as convincing as a good anecdote. So, I’ll share my experience and opinion now.

I love philosophy, but it has not made me any happier. It may have even forcibly popped some comfortable bubbles… bubbles that can’t be re-inflated because the pointy arguments responsible remain stuck in my brain. The only exception to this is meat-eating. Moral philosophy forced me to do away with it. Luckily my personal greed finally overcame the argument responsible. That argument is still sharp and pointy, and it’s still stuck in my head. I’ve learned to ignore it most of the time. But, I’d be happier without that occasional guilt.

To the extent that I am happy, it’s because of my optimistic disposition, my ability to selectively focus my attention, and the fact that nothing very bad has ever happened to me.

So, philosophy hasn’t made me happy. But I do appreciate that I have such a meaningful job. Philosophers can effect real change in the minds of people who will go on to impact the lives of thousands or millions of others. But, I don’t suppose that is unique to philosophy. Any kind of teacher could do something similar. And many professions can bring about positive change in the world. Consider also being a parent. Parents have a smaller audience (possible exception of Prof. DeSilvestro), but they have a deeper impact.

But, one anecdote does not make a complete proof. So, I call on all philosophy students, faculty, and philosophically-inclined bystanders to help me out. Please share your stories and views in the comments section. Some readers have dedicated all of their working lives to philosophy. Others are planning on doing so. Still others are planning on not doing so! How do you think your decisions have affected, and will affect, your happiness and/or your wellbeing? I’d particularly like to hear from students, because most of us faculty have committed to philosophy in a way that would create extreme cognitive dissonance for us if we thought philosophy made us into the miserable grouches we are today!

Dan Weijers
Department of Philosophy
Sacramento State

Sunday, October 5, 2014

What crazy idea in philosophy do you think should be taken more seriously?


Kyle Swan: Open borders

People widely regard open borders not just as crazy, but as an insidious, masochistic attempt to bring down America.

But economists estimate that international barriers to labor mobility waste roughly an entire Earth’s GDP, about $75 trillion, every year. This is mostly due to labor’s place premium: grab, say, a Cambodian construction worker, plop him down in the US, and his earning potential increases 8 times. No one thinks that implementing an open borders regime would be entirely smooth and seamless, but it seems like $75 trillion could offset a bunch of potential problems and disruptions.
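The place-premium arithmetic is stark even at the level of a single worker. The eightfold multiplier is the figure from the paragraph above; the baseline wage is a made-up number for illustration only:

```python
# Assumed home-country wage (illustrative; not a figure from the post).
cambodia_annual_wage = 3_000          # USD per year
place_premium = 8                     # multiplier cited for US relocation

us_annual_wage = cambodia_annual_wage * place_premium    # 24,000
gain_per_worker = us_annual_wage - cambodia_annual_wage  # 21,000 per year
```

Multiply a per-worker gain of that size by hundreds of millions of potential migrants and the trillions-per-year estimate stops looking mysterious.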

Notice also that permitting labor mobility isn’t charity. It isn’t charity to stop preventing people from trying to improve their lives. Not being cruel to others is the least we expect from each other. It isn’t our fault that Cambodians are very, very poor. But it is our fault that we keep them poor by preventing people from offering them jobs that pay 8 times better than the ones they have.

This is the point of Michael Huemer’s (Colorado) Starvin’ Marvin thought experiment. Many think that it’s morally worse to harm than to fail to confer a benefit. Even if that’s right, it’s obvious that closing borders is an instance of the former rather than the latter.

It was really, really bad when US law permitted employers to discriminate against women and minorities. But these days US law requires employers to discriminate against people with the bad luck to have been born in poor countries.


Patrick Smith:  There are no ordinary objects

Here’s a properly insane idea: there are no ordinary objects. No rocks, no trees, no books, no Buicks. How on earth could we think this? The notion that there are none of the ordinary objects that constitute much of our perceptual experience seems to be the result of extreme delusion. But how insane an idea is it?
Let’s consider three propositions:
(1) There is at least one book here. 
(2) If something is an object (like a book), then it is composed of a finite number of atoms (but many, many of them). 
(3) If something is an object like a book (composed of a finite number of atoms, but many, many of them), then removing one or two atoms will not change the fact that there is at least one book here.
This is an example of an inconsistent triad: three propositions that lead to a contradiction if we accept all three as true. If (3) is true and you apply it over and over, you end up having to say that there is at least one book here even when no atoms remain. Since this is absurd, something has to give.
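The repeated application of (3) can be made vivid in a toy loop (a sketch of the induction, not a serious mereology): keep removing atoms, and premise (3) forces us to keep saying there is a book, all the way down to zero atoms, which collides with (2).

```python
def apply_premise_three(n_atoms):
    """Remove atoms one at a time; by premise (3) each removal
    leaves the claim 'there is at least one book here' intact."""
    there_is_a_book = True            # premise (1)
    while n_atoms > 0:
        n_atoms -= 1                  # remove one atom
        there_is_a_book = True        # premise (3): still a book
    return n_atoms, there_is_a_book

atoms_left, still_a_book = apply_premise_three(10**6)
# We end with zero atoms and, absurdly, still "at least one book,"
# contradicting premise (2) that a book is composed of many atoms.
```

The loop is just the classical sorites series run to its endpoint; nothing in premises (1)-(3) ever licenses flipping `there_is_a_book` to `False`.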

Now, (2) looks like science and (3) looks obvious, right? So, to preserve consistency, (1) has to go. And books are the same sort of thing as rocks, trees, and Buicks. So, there are no ordinary objects.


Brad Dowden:  You can be of two minds

Not figuratively. Literally. And be in two places at once. A crazy idea.

This crazy idea needs to be taken more seriously because it is implied by rejecting the idea that you are an Ego Object thinking inside your brain. That classical “you” is an illusion. The new you is not some brain add-on, as if otherwise you would be a philosophical zombie.

To appreciate the implication, imagine that Derek Parfit invents a teleportation machine in his ontology laboratory. You step into his machine in Oxford, where the machine creates an atom-by-atom blueprint of your body, then sends this information by radio to Paris. A physical copy is created ten minutes later in the Paris machine using Paris atoms. Meanwhile, only a pile of dust leaves the Oxford machine. You now exist in Paris.

But suppose the same information that is sent to Paris is also sent to a St. Petersburg machine. Two yous are created in these two machines simultaneously. The two yous start having different thoughts, and soon are no longer identical, but they’d both be you. At the moment of creation, you begin being of two minds and being in two places at once. This is double survival after annihilation, not no survival. From the first-person perspective, they’d both know they are you, in just the way that you can know that you are you after a night’s sleep without yet opening your eyes and looking into the mirror.


Emrys Westacott: The goal of philosophy is not truth

The pursuit of truth is the form that philosophical activity takes, but truth is not its end or purpose.

We assume that the goal of philosophy is truth because Plato said so, because science pursues truth and we’d like to be respected the way scientists are, and because the Truth is generally assumed to be a good thing–it hangs out with the Good and the Beautiful, “sets you free”, etc.

But the real purpose and value of philosophy is to be a medium through which a culture reflects upon itself. It shares this function with literature, film, and the other arts, as well as the social sciences. Methods vary between and within disciplines, but each discipline contributes to an endless, ongoing conversation about matters concerning humanity: how we conceive of ourselves and our activities, how we relate to nature, how we relate to each other, as well as normative variations on these questions. The great conversation may sometimes produce beneficial practical consequences, but its primary value is that it is enjoyable in itself and deepens the reflective dimension of human existence.

This view of philosophy should be taken more seriously because it might help assuage the anxiety philosophers feel over the fact that they aren’t scientists making well-defined contributions to the store of human knowledge. It’s crazy because even in the act of advancing it one seems to be announcing the discovery of an important truth.


Randy Mayes: False knowledge

The traditional analysis of knowledge as justified, true belief had a great run, but it lost its mojo in the late 20th century. No widely accepted analysis has emerged to take its place. Today there is much skepticism concerning the justification condition, and some concerning the belief condition. Few philosophers, however, dispute the truth condition. "Jane knows X, and X is false" is widely condemned as crazy talk.

There is a familiar sense of the term knowledge, regarding which such statements are incoherent. However, the familiar sense of a term is not of singular interest in philosophical inquiry. Human understanding of the world grows because we permit familiar meanings to change. One important mechanism of conceptual change is naturalization, which typically occurs in the service of scientific inquiry. Roughly, we appropriate a term whose ordinary meaning is informed by our sense of how the world ought to be, and tweak it to be useful in understanding how the world is.

Today, in disciplines like information science and artificial intelligence, researchers study knowledge as a natural phenomenon, not a normative concept. They seek to explain how knowledge is acquired, preserved, transmitted and consumed. From this perspective, it is unwise to stipulate that all knowledge is true, because it is an open empirical question whether the presence or absence of truth has explanatory value. Knowledge is, roughly, usable information. Some of it may be true; much of it clearly is not. Philosophers are crazy not to take this concept of knowledge seriously.


Scott Merlino: Supernaturals are superfluous

Descriptions and explanations do not need actual supernaturals to make sense out of what we observe, feel, think, do or say. This is a rational reason for not believing that angels, demons, faeries, ghosts, gods, vampires, werewolves, witches, or zombies exist. Doing philosophy (logic, epistemology, metaphysics, ethics) demonstrates this. Have you got a proof that shows that God is necessary? Well, Norman Malcolm (1960) has an ontological argument that shows the opposite.

What makes this a crazy idea is that most people believe in supernaturals (Harris Poll, 2013). Supernaturals are non-physical, non-mental, non-sensible agents that are unconstrained by spacetime or natural laws. Angels visit, ghosts haunt, devils make us sin, gods make and destroy worlds.

We should take this view more seriously because ignoring it blinds us to fundamental incompatibilities between religion and science. They disagree about the necessity of a divine creator: either an intelligent creator must exist, given the evidence, or it is not the case that one must exist, given the same evidence. They disagree about how best to explain, say, the amazing diversity and complexity in nature. Theists think one cannot explain it without the god of Abraham, Isaac, and Jacob, but cosmology and evolutionary theory show how one can. Each worldview cultivates contrary attitudes about common sense, traditional beliefs, and novel assertions: accept claims based on faith, regardless of evidence, or believe only those claims grounded in testable evidence. Supernaturals, not being amenable to investigation, could be imaginary and we would not notice any difference.


Tom Pyne: Formal causes are real

It’s crazy to think that after 400 years we should see the revival of formal causes. Nonetheless, I think we will – and should.

Aristotelian natural philosophy appealed to the substantial form of the object and the capacities its substantial form bestows. Thus, a medieval Aristotelian explanation of why a material object continues in motion after it leaves the thrower’s hand involves the reception of an impressed form (impetus) which continues it in motion. Applied to freely falling bodies, accumulating impetus amounts to an acceleration, whose effects were describable by the Galilean formula ‘f = ma.’

Galileo was consciously anti-Aristotelian. He produced algebraic formulas which tell how much mass or acceleration is required to produce a given force, but offer no account of why.

Philosophers of science drew the wrong lesson: since the great increase in scientific knowledge occurred after the breakdown of the Aristotelian synthesis, that synthesis prevented it. Post hoc, ergo propter hoc. On the contrary, medieval natural philosophers had the resources to produce the scientific revolution – and came close to doing so.

Instead of substantial forms we have ‘laws of nature,’ and centuries of debate over their force. Since they’re inductive, what confidence do we have about the future? Since they’re idealizations, they’re (strictly speaking) false. What’s with that? Do metaphysical cops enforce the law of gravity?

The capacities and powers substantial forms bestow on their objects are dispositional. In suitable circumstances the disposition is manifested; otherwise not. And if not, no ‘law’ is broken. There is no natural necessity to cause problems.


Russell DiSilvestro: First over third

Your first-person perspective sometimes has more legitimate epistemic authority for you than any combination of third-person perspectives—even in limit cases where it’s you versus your time’s unanimous scientific opinion.

Crazy, ain’t it? After all, some lunatics say as much: “don’t bother me with scientific or other ‘facts’; I just know I’m right, period.” Perhaps each of us can be gripped by a dogma that we think we know—perhaps via introspection—to be the truth about some matter. This happens in most areas of philosophy, including ethics—think of your strongest (a-)moral intuition—and philosophy of religion—think of your most vivid (anti-)religious experience. Silly subjectivism!

More seriously, sometimes your first-person perspective may rightly trump all third-person ones—combined. Forget what the crowd on the street all says; you—and only you—saw the whole traffic accident unfold from the window above the intersection. Even an equatorial tribesman who’s never seen or heard of frozen water should believe in it when his own lyin’ eyes are staring at an ice cube. (‘Course, same goes for when he hears a third-person report about ice by a trusty Scotsman—even one named David Hume.)

Also, isn’t the third-person edifice of empirical natural science built entirely of…first-person stones? Of scientist observations from individual experiments? Isn’t trying to do such science without first-person perspectives like trying to make bricks without straw (while building a pyramid)? The authority of first- over third- may be that of vine over branches.


Dan Weijers: Quantitative prudential hedonism is our best theory of prudential value

Quantitative prudential hedonism is the view that all and only pleasure is intrinsically good for us (and the opposite for pain), and that the value of a pleasure or pain felt at any moment is dictated solely by its intensity.

Some implications: 
a) the good life is a life with many pleasures and few pains; 
b) non-pleasures that seem good for us are only good to the extent that they lead to pleasure for us;  
c) events and experiences that do not increase our pleasure or decrease our pain cannot be good for us; 
d) the source or “quality” of a pleasure does not affect its value; only the amount of pleasure does.

Quantitative prudential hedonism is thought to be crazy because of another of its implications: connecting to a flawless machine that ensures a constant and intense feeling of pleasure BUT NO OTHER EXPERIENCES AT ALL is the best thing that anyone could do to further their own welfare. To most people, this sounds like crazy talk!

But how are we to know what being attached to such a machine would be like? J.S. Mill would have us believe that we’d be in the best position to judge the comparative value of our life and this machine life if we had experienced both. But no one has experienced a life of constant pleasure. So how do we know it won’t be super great?! And remember, whatever reason you give, I will try to make it irrelevant by reducing it to pleasure and pain.

Sunday, September 28, 2014

In defense of causes

Students in Early Modern Philosophy seem shocked to learn that the scientific revolution beginning with Galileo was in effect an attack on the notion of causality. They assume that scientific explanations are causal explanations. Appeals to causality, however, did not figure in early modern science. Nor (while common in ordinary affairs) do they figure in our contemporary explanatory practices, whether inside science or out.

Scientific laws make no mention of causes, for instance. Galileo’s law for a freely-falling body

d = ½gt²

tells us how far the object has fallen in a given number of seconds. The value for ‘g’ – 32.2 ft/sec² – cannot be said to ‘cause’ the object to fall. (I propose a way to say that it does.)
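To see concretely how the law tells us *how far* without telling us *why*, here is a minimal numerical sketch in Python. The function name and the sample times are mine; the value of g is the one quoted above.

```python
# Galileo's law for a freely falling body: d = (1/2) g t^2.
# Units follow the post: feet and seconds, with g = 32.2 ft/sec^2.
G_FT_PER_S2 = 32.2

def fall_distance(t_seconds: float) -> float:
    """Distance in feet a body falls from rest in t seconds (no air resistance)."""
    return 0.5 * G_FT_PER_S2 * t_seconds ** 2

if __name__ == "__main__":
    for t in (1.0, 2.0, 3.0):
        print(f"after {t:.0f} s: {fall_distance(t):.1f} ft")
```

Note that nothing in the computation mentions a cause: g enters only as a numerical coefficient relating time to distance.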

In the ‘Deductive-Nomological’ model, logical deduction takes the place of causality. You derive the phenomenon in question as a logical conclusion from general laws plus a statement of initial conditions. Indeed, to its advocates it was a virtue that the model avoided issues of causality.

Increasingly, scientific explanations take the form of statistical correlations, leaving the question of causality entirely aside. The belief seems to be that, once you grasp the patterns of statistical variation, you have access to everything that it is possible – or necessary – to know.

So, are causal relations real features of the world or are they not? If they are, then explanations omitting them are spurious.

On the other hand, if they are not real, some difficult philosophical questions arise. A formulation of a law of nature is logically contingent. So if we take it to express a natural necessity then (barring the postulation of a Lawgiver) there will be no explanation why the particular law in question is the law. On the other hand if we accept a statistical correlation as the explaining formula, then the fact of the correlation itself cries out for explanation.

To be fair, three important considerations made denying a place for causality in explanation seem a reasonable thing to do – one historical, one epistemic, the third conceptual.

First, causes formed a central feature in Aristotelian natural philosophy. It is easier now to see that the apparent incompatibility between Aristotelian and Early Modern forms of explanation arose from features of a particular historical situation; it isn’t logical or metaphysical. 16th century Aristotelian natural philosophy was routinized and degenerate, but in the 14th century it was still very much alive and fruitful in results. Natural philosophers were mathematizing Aristotle’s principles of moving bodies. William of Heytesbury, a member of the ‘Mertonian Calculators,’ derived the mean speed theorem usually attributed to Galileo. Jean Buridan improved on Aristotle by postulating that a moving body possessed an impetus. This impetus was proportional to the object’s weight, not identical to it as in Aristotle. It was an enduring property, and thus did not require continued action to maintain it. More importantly, in Buridan’s application of impetus to freely-falling bodies it causes a change in the momentum of the body: that is, like Galileo’s factor g, it was an acceleration. To be sure, on Buridan’s account objects with more mass should fall faster. But this was also true in Galileo’s earlier Pisan dynamical theory (1589), which was not a significant improvement on Buridan. It’s now reasonable to claim that the resources necessary to have produced the Scientific Revolution were available to thinkers within the Aristotelian synthesis. Thus it is merely a contingent historical fact that the New Science makes no appeal to causes.
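The mean speed theorem the Calculators derived can be put in modern notation (the reconstruction below is mine, not their formulation): a body uniformly accelerated from rest covers the same distance as one moving uniformly at the mean of its initial and final speeds.

```latex
% Mean speed theorem, modern reconstruction:
% distance under uniform acceleration a from rest over time t
% equals distance at the constant mean speed (0 + at)/2.
\[
d \;=\; \bar{v}\,t \;=\; \frac{0 + at}{2}\,t \;=\; \tfrac{1}{2} a t^{2}
\]
```

Setting a = g gives exactly Galileo’s law for free fall, which is why the theorem is so often credited to him.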

Second, it is reasonable to suppose that, even if there are causal relations, we have no independent cognitive access to them. All we can hope for are empirically discoverable natural laws or statistical correlations. However, it is also reasonable to suppose that we do have such access. The view that we don’t was of course codified by Hume and has become one of philosophy’s deepest prejudices. Pace Hume, we observe causes quite frequently. As John Searle points out, when a car backfiring makes you jump, you experience the causal relation: you don’t need to experience two backfires to get the connection. Wittgenstein’s advice to philosophers is particularly helpful here: “Don’t think, but look!”

Third, the prevailing debate on causality concerns whether it is a relation between events or states of affairs. However, this is a symptom rather than a cause of the modern avoidance of appeal to the relation. It is a logicizing of the relation, reducing it to a species of entailment. It leaves us unsure about such fundamental issues as whether it is even a temporal relation at all.

Are these considerations a sufficient excuse for continuing to avoid causality? I think not. If there is no fundamental conflict between these two styles of explanation, then causal explanations and the whole panoply of contemporary science can work together. Indeed they should.

The concept of causality I favor would make it not just a relation between events or states of affairs, but between individuals in a number of categories – including events and states of affairs. Abstractly, the properties of an individual N give it causal powers to affect, and to be affected by, other individuals. Those powers would be described dispositionally and functionally. Sometimes N’s causal powers result in effects on individual J and sometimes they don’t, depending on J’s own powers as well as features of the environment. 

So Galileo’s g does ‘cause’ an object to fall. It is a measure of an object’s disposition to accelerate. Accordingly, causes don’t ‘necessitate’: air resistance could affect the distance the object falls in a given time. Unlike Hume, we could still say that N is being affected, and still has the disposition, even though it is not manifesting it. Dispositions and functions can be said to be ‘realized’ by the micro-entities of standard science.

The advantage to this explanatory move is that the temptations toward instrumentalism and eliminativism so common in our present explanatory practices would be much diminished, if they do not vanish entirely.

Who’s with me?

Thomas Pyne
Department of Philosophy 
Sacramento State

Sunday, September 21, 2014

Explanation and illusion


This is a sequel to a post I wrote in June called "The explanatory reductio." The point there was this: although an explanation is an attempt to understand an accepted fact, sometimes our inability to provide an explanation becomes a reason for rejecting the 'fact' instead. If you can't explain that beautiful creature in your bed, maybe it's not your bed. Here I offer the following observation, and provide you with some examples:

Many of the greatest intellectual insights in human history resulted from someone explaining a widely accepted fact as a grand illusion.

1. The moon illusion

As everyone knows, the moon is larger when it appears on the horizon than when it is high in the night sky. Sometime back in human prehistory ancient people wondered: Why does the moon grow and shrink like that? One evening, while sitting at his fire after a fruitless day of hunting, the famous cave philosopher Og, had an epiphany.

Og say moon not get small. Moon only look  small. Moon like prey, get small when run away.

Og was the first to articulate the appearance/reality distinction, an incredible leap in human understanding. Later astronomers accepted Og's view that the changing size of the moon is illusory, but they rejected his explanation. They believed all celestial bodies move in perfect circles, with the earth at the exact center of their orbits. Aristotle proposed a new explanation: the atmosphere acts as a lens, magnifying the moon's appearance. Wrong again, as can be easily shown. Hold a quarter at a fixed distance from your eye so that it just covers the moon on the horizon, then do the same when the moon is high in the sky: the same coin at the same distance covers it both times, so the moon's apparent size has not changed. There is still no accepted explanation of this phenomenon, and there probably will not be one until we understand which of several different illusions relating to relative size and distance our brain circuitry is falling for.

2. Planetary retrograde

The powerful idea that heavenly bodies move in perfect circles around the earth was dogged for centuries by the strange phenomenon of planetary retrograde. Long before Aristotle, astronomers from different cultures had observed that planets would sometimes start moving backwards. Apollonius and, later, Ptolemy explained the retrograde as a real phenomenon. Their theory of epicycles held that a planet did not move in a simple circle, but rode a smaller circle (the epicycle) whose center traveled along the larger orbit.

At the bottom of the epicyclic orbit, the planet itself really does move backwards. Genius. And wrong. One of the great insights embedded in the heliocentric system proposed by Nicholas Copernicus is that retrograde motion is an illusion that occurs when earth's orbit either overtakes, or is overtaken by, the orbits of other planets.

3. Galilean relativity

Aristotle's physics accepted as real a variety of phenomena that turn out to be illusions. One of the most important - the basis of his categorical distinction between the heavens and the earth - is that rest and uniform motion are different states of matter. For Aristotle the natural state of an object is rest; it requires no explanation. For an object to move, and to continue moving, a cause is required. This illusion was finally shattered by Galileo (the most famous early adopter of the Copernican view), who realized that motion is not a property of an object at all; it is just an artifact of the chosen reference frame. Any object that can be described as moving in one reference frame can be described as motionless in another. Galileo's insight was ultimately codified in Newton's First Law of Motion.

4. Darwin's theory of evolution

What accounts for the design of the universe? As most philosophy students know, William Paley argued that, just as the evident design of a watch suggests the existence of a watchmaker, so the evident design of the universe suggests the existence of a universe maker. Even David Hume, who rejected the designer hypothesis as childish anthropomorphism, admitted that the design of the universe required an explanation - and that he didn't have one. Enter Charles Darwin. In the Origin of Species Darwin provided a theory of the emergence, not of design, but of the illusion of design. The illusion of design could, he showed, result from a process involving competition, reproduction and blind, natural 'selection'. Darwin's insight effectively completed the Copernican revolution, destroying the nearly universal belief that human origins and human capacities are beyond human understanding.

5. Continental drift

What Darwin did to the eternality of species, Alfred Wegener did to the immobility of continents. Until the late 20th century every schoolkid was taught that these gigantic land masses were necessarily immobile. That uncanny fit between the west coast of Africa and the east coast of South America? Coincidence. The similar fossil distributions on adjacent coastlines? Strategically placed land bridges that have, sadly, vanished without a trace. Wegener's theory of continental drift, which hypothesized an ancient supercontinent called Pangaea, represented the immobility of the continents as an illusion: they simply move too slowly for us to notice. Wegener's theory was widely and understandably ridiculed, for he proposed no mechanism. But long after Wegener's death his view was mostly vindicated by the now universally accepted theory of plate tectonics.


These episodes are exemplary, not just because the thinkers were brave or creative enough to challenge orthodoxy, but because they succeeded in developing a model that explains both the illusion and the real underlying phenomenon. Even many who achieve this are not ultimately successful. Plato agreed with Parmenides that the reality of the physical world is an illusion. He explained it poetically as a shadow cast by the real world of the Forms. But the shadow was a metaphor drawn from the physical world, and the perfect world of Forms turns out to exist only in our imaginations. There are many other examples of the kind I have given above, historical and contemporary, successes and failures. Perhaps our dancers will identify some of them.

G. Randolph Mayes
Department of Philosophy
Sacramento State