Sunday, November 26, 2017

Are you an Oughtist or a Noughtist?

People who have beliefs about the way the universe fundamentally is can be divided into two distinct groups.

The first, and by far the largest, comprises folks who believe that the universe is organized normatively. Roughly speaking, they believe that the most comprehensive true account of why things happen the way they do will make essential reference to the way things ought to be. Call them Oughtists. The second group is composed of those who deny this. They believe that the most comprehensive true account of why things happen the way they do will tell us the way things fundamentally are, not the way they ought to be. Call these folks Noughtists.

There are different sorts of normativity, the moral sort being the most familiar. Moral Oughtists believe that the universe is organized according to principles of right and wrong. Almost all religious people are moral Oughtists, as are many others who decline to describe themselves as religious but do believe in a moral order: fate, destiny, karma, etc. Traditional religious Oughtism rests on the belief that the universe was created by a supremely good deity. But you'd be no less an Oughtist for believing that it was created by a supremely evil one.

Occidentally speaking, Oughtism can be traced to Plato. Plato developed an account of the universe according to which everything aspires to the form of the Good. Noughtism is most commonly traced to Plato’s most famous student. Aristotle argued that, as Plato’s forms do not belong to this world, they can have no explanatory significance for this world.

But Aristotle was only slightly noughty. He subscribed, e.g., to fundamentally normative principles of motion. In particular he believed that the heavens are a place of perfection and that celestial bodies move uniformly in perfect circles for eternity. They don’t just happen to do this; they do it because this is the most perfect way. Aristotle’s Oughtism persisted for 2000 years, during which time human understanding of the universe increased very little.

It would be handy to say that the death of Oughtism coincided with the birth of science. But Oughtism is not dead, so this is clearly not true. What’s truer is that the birth of science resulted from an increasing inclination on the part of a very small number of very odd ducks to inquire into the world without judging it.

People like Galileo, Kepler and Newton remained Oughtists in the sense that they sincerely believed the universe to be of divine origin. But they took an unprecedentedly noughty turn in ceasing to believe that we could come to know how the universe works by thinking about how a divine being might go about building one. This peculiar mixture of hubris and humility lit the fuse that produced the epistemic explosion that, in a few short centuries, created the modern world.

The story of the growth of scientific understanding is the story of the full retreat of Oughtism. It slinked over the scientific horizon with the general acceptance of Darwin’s theory of evolution. Darwin expressed the vaguely oughty opinion that “there is grandeur in this view of life.” But ordinary folks see it for what it is: a ghastly story of nature “red in tooth and claw,” devoid of any overarching purpose or meaning. Indeed, it is so offensive to our moral intuitions that most moral Oughtists continue to reject it as an account of the true origin of people.

Modern scientists still sometimes speak of their theories in normative terms, especially aesthetic ones. Einstein, e.g., was not religious, but he insisted that “God does not play dice,” an oughty expression of his conviction that randomness is too ugly to be an essential feature of the way the world works. But Einstein didn’t arrive at the general theory of relativity by contemplating the nature of Beauty; nor did a single one of the experiments by which it was subsequently confirmed attempt to ascertain whether it is beautiful enough to be true.

So, epistemically speaking, we live in a pretty weird world. We owe it to the expulsion of Oughtism from the playground of science. If this had not occurred, we would all still believe oughty theories of reproduction, disease, poverty, war, social hierarchy, famine and natural disasters. We would still believe in witches and the efficacy of curses. We would know absolutely nothing of galaxies, germs, cells, molecules, atoms, electrons, radiation, radioactivity, mutation, meiosis, or genes. Quotidian items like light bulbs, cameras, watches, automobiles, airplanes, phones, radios, computers, vaccines and antibiotics would not even exist in our imaginations. Yet knowing all of this causes very few to reject Oughtism as a general worldview.

Why this is so is an interesting question, but not one I mean to discuss here.

I conclude with the following observation: Most philosophers, even those who believe themselves to be very noughty indeed, are Oughtists at heart. This is because almost all of us, even the most “analytic,” assume that our normative intuitions are a reliable guide to the nature of reality.

There are several reasons for this, but I think the most important one is that philosophers are naturally drawn to features of the world that are normatively non-neutral. This is obvious in the case of intrinsically normative concepts like justice, virtue, responsibility and reason. But it is also true of most other traditional philosophical topics: free will, personal identity, mind, meaning, causation, consciousness, knowledge, thought, intelligence, wisdom, love, life, liberty, autonomy, happiness. All of these carry a positive valence (and their opposites a negative one) that we presume to be essential to them. Hence, we confidently evaluate any proposed theory according to whether it causes us to experience the correct level of (dis)approbation.

This is why, for example, most of us instinctively recoil from theories that propose to reduce phenomena associated with life, mind and spirit to the “merely” physical. Such theories have no normative implications, and therefore do not satisfy the Oughtist need to understand these phenomena as exalted states of being.

G. Randolph Mayes
Department of Philosophy
Sacramento State

Monday, November 13, 2017

It’s Time To Pull The Switch On The Trolley Problem

Back in August, Germany became the first nation to institute federal guidelines for self-driving cars. These guidelines include criteria for what to do in the case of an impending accident, when split-second decisions have to be made. Built into these criteria is a set of robust moral values, including a mandate that self-driving cars prioritize human lives over the lives of animals or property, and a prohibition on discriminating between humans on the basis of age, gender, race or disability.
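To make the structure of those two constraints concrete, here is a deliberately toy sketch (mine, not the text of the German guidelines and not any manufacturer's actual software) of how a collision-mitigation rule might encode them; the obstacle labels and function names are invented for illustration.

```python
# Toy sketch only: not the German guidelines and not any real vehicle's code.
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    kind: str  # "human", "animal", or "property" (hypothetical labels)

# Humans outrank animals, which outrank property. No attribute of a human
# (age, gender, race, disability) appears anywhere in the rule; never consulting
# such attributes is the simplest way to honor the non-discrimination requirement.
PROTECTION = {"human": 0, "animal": 1, "property": 2}  # lower = more protected

def outcome_score(path: List[Obstacle]) -> int:
    """Score a candidate trajectory: 3 = hits nothing, 2 = hits only property,
    1 = hits animals but no humans, 0 = hits at least one human."""
    return min((PROTECTION[o.kind] for o in path), default=3)

def choose_trajectory(candidates: List[List[Obstacle]]) -> List[Obstacle]:
    """Pick the trajectory that steers harm away from humans first, then animals,
    then property."""
    return max(candidates, key=outcome_score)

# Example: swerving into a parked car beats staying on course toward a pedestrian.
assert choose_trajectory([[Obstacle("human")], [Obstacle("property")]]) == [Obstacle("property")]
```

Notice that nothing in even this cartoon version requires weighing one human life against five, a point that matters below.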

Philosophers have an obvious interest in these sorts of laws and the moral values implicit in them. Yet in spite of the wide range of potentially interesting problems such technology and legislation pose, one perennial topic seems to dominate the discussion both amongst philosophers and the popular press: the Trolley problem. So electrifying has this particular problem become that I suspect I don’t need to rehash the details, but just in case, here is the basic scenario: a runaway trolley is hurtling down the tracks towards five people. You can save the five by throwing a switch diverting the trolley to a side track, where there is only one person. Should you throw the switch, saving the five and sacrificing the one, or should you do nothing, letting the one live and letting the five die?

Initially developed some 50 years ago by Philippa Foot and modulated dozens of times in the intervening decades, the Trolley problem was long a staple of intro to ethics courses, good for kick-starting some reflection and conversation on the value of life and the nature of doing vs. allowing. Hence, you could practically feel philosophy departments all over the world jump for joy when they realized this abstract thought experiment had finally manifested itself in concrete, practical terms with the advent of self-driving cars.

This excitement fused with some genuinely fascinating work in the neuroscience of moral decision making. The work of scholars like Josh Greene has provided genuine insight into what occurs in our brains when we have to make decisions in trolley-like situations. Out of the marriage of these two developments—along with some midwifery from psychology and economics—the field of ‘trolleyology’ was born. And it is my sincere hope that we can kill this nascent field in its crib.

Why should I, as an ethicist, have such a morbid wish for something that is clearly a boon to my discipline? Because, despite its superficial appeal, there is really not very much to it as far as a practical problem goes. It is marginally useful for eliciting conflicting (perhaps even logically incompatible) intuitions about how the value of life relates to human action, which is what makes it a useful tool for the aforementioned intro to ethics courses. But the Trolley problem does precious little to illuminate actual moral decision making, regardless of whether you’re in a lecture hall or an fMRI.

To see this, take a brief moment to reflect on your own life. How many times have you ever had to decide between the lives of a few and the lives of the many? For that matter, take ‘life’ out of the equation: how many times have you had to make a singular, binary decision between the significant interests of one person and the similar interests of multiple people? Actual human beings face real-world moral decisions every day, from the food choices we make and the products we purchase, to the ways we raise our children and how we respond to the needs of strangers. Almost none of these decisions share the forced, binary, clear 1-vs.-5 structure of a trolley problem.[1]

What then of the self-driving car example I opened with? Does this not demonstrate the pragmatic value of fretting over the Trolley problem? Won’t the ‘right’ answer to the Trolley problem be crucial for the moral operation of billions of self-driving cars in the years to come? In short, no. Despite all the press it has gotten, there is no good reason to think the development of self-driving cars requires us to solve the Trolley problem any more than the development of actual trolleys required it almost 200 years ago. Again, check your own experience: how often behind the wheel of a car did you—or, for that matter, anyone you know, have met, or have even read about—ever have to decide between veering left and killing one or veering right and killing five? If humans don’t encounter this problem when driving, why presume that machines will?

In fact, there’s very good reason to think self-driving cars will be far less likely to encounter this problem than humans have been. Self-driving cars have sensors that are vastly superior to human eyes—they encompass a 360-degree view of the car, never blink, tire or get distracted, and can penetrate some obstacles that are opaque to the human eye. Self-driving cars can also be networked with each other, meaning that what one car sees can be relayed to other cars in the area, vastly improving situational awareness. In the rare instances where a blind spot occurs, the self-driving car will be far more cognizant of the limitation and can take precautionary measures much more reliably than a human driver. Moreover, since accidents will be much rarer when humans are no longer behind the wheel, much of the safety apparatus that currently exists in cars can be retooled with a mind to avoiding situations where this kind of fatal trade-off occurs.[2]

Both human beings and autonomous machines face an array of serious, perplexing and difficult moral problems.[3] Few of them have the click-bait-friendly sex appeal of the trolley problem. It should be the responsibility of philosophers, psychologists, neuroscientists, A.I. researchers, and journalists to engage the public on how we ought to address those problems. But it is very hard to do that when trolleyology is steering the public’s attention in the wrong direction.

Garret Merriam
Department of Philosophy
Sacramento State

[1] There are noteworthy exceptions, of course. During World War II, Winston Churchill learned of an impending attack on the town of Coventry and decided not to warn the populace, for fear of tipping off the Germans that their Enigma code had been cracked by the British. Hundreds of people died in the bombing, but the tactical value of preserving access to German communications undoubtedly saved many more by helping the Allies to win the war. If you’re like most people, you can be thankful that you never have to make a decision like this one.

[2] For example, much of the weight of the car comes from the steel body necessary to keep the passengers safe in the event of a pileup or a roll-over. As the likelihood of those kinds of accidents becomes statistically insignificant, this weight can be largely removed, lowering the inertia of the car and making it easier to stop quickly (and more fuel-efficient, to boot), thus avoiding the necessity of trolley-type decisions.
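A back-of-the-envelope way to see the inertia point (my simplification, not Merriam's figures): if the peak braking force F that the brakes and tires can deliver is treated as roughly fixed, then for a car of mass m travelling at speed v,

```latex
a = \frac{F}{m}, \qquad d_{\mathrm{stop}} = \frac{v^{2}}{2a} = \frac{m\,v^{2}}{2F}
```

so, under that idealization, stopping distance scales in proportion to mass: shed weight and the car stops correspondingly sooner.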

[3] Take, for example, the ethics of autonomous drone warfare. Removing human command and control of drones and replacing it with machine intelligence might vastly reduce collateral damage as well as PTSD in drone pilots. At the same time, however, it lowers valuable inhibitions against the use of lethal force even further, and potentially creates a weapon that oppressive regimes—human controlled or otherwise—might use indiscriminately against civilian populations. Yet a Google search for “autonomous military drones” yields a mere 6,410 hits, while “autonomous car” + “trolley problem” yields 53,500.

Monday, November 6, 2017

What famous philosophical argument gets too much love?

This week we asked philosophy faculty the following question:
What famous philosophical argument (observation, distinction, view etc.) is given entirely too much attention or credit? Why?
Here's what they said:


Matt McCormick: Searle's Chinese room

Does a computer program that correctly answers thoughtful questions about a story actually understand it?

In Searle’s thought experiment, a human, playing the part of a CPU, follows the computer-code equivalent of instructions for answering questions about a story in Mandarin. The human doesn't know Mandarin but, through the instructions in the code, can, by hypothesis, answer questions as if she understands the story.

Searle maintains that when we imagine ourselves in this position, it is intuitively obvious that we don't understand the story in Mandarin. He concludes that machines accurately modeled by this process (i.e., Turing machines) don't think or understand.
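For readers who want the cartoon spelled out, here is a toy sketch of the sort of rule-following the scenario imagines (the story, questions, and answers are invented here; Searle's original uses a hand-worked rule book, not Python):

```python
# Toy illustration only: a cartoon "rule book" question-answerer.
# The story, questions, and answers below are invented for this sketch.

RULE_BOOK = {
    # maps an incoming string of symbols to an outgoing string of symbols
    "Where did Mei leave her umbrella?": "On the train.",
    "Was it raining?": "Yes, heavily.",
}

def operator_in_the_room(question: str) -> str:
    """Apply the rules mechanically. Nothing here requires the operator (or the
    program) to understand what the symbols mean, which is the intuition the
    thought experiment is built to pump."""
    return RULE_BOOK.get(question, "I don't know.")

print(operator_in_the_room("Was it raining?"))  # -> Yes, heavily.
```

Part of the complaint below is precisely how little this lookup-table cartoon resembles what a modern trained network is actually doing.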

The thought experiment capitalizes on gross oversimplifications, misdirection, and a subtle equivocation. Several implicit assumptions are false once we draw them out:
  • My armchair imaginings about this caricatured scenario accurately capture what a sophisticated artificial neural net computer is doing;
  • My intuitions about what I would and wouldn't understand in this imaginary scenario are reliable indicators of the truth in reality;
  • People are reliable judges of when they do and don't understand;
  • If I were playing the role of a dumber part of a larger, smarter system, I would be apprised of whether or not the system itself understands.
Once we unpack what would comprise such a system, particularly with modern artificial neural networks trained with machine learning, then we realize how cartoonish Searle’s story is, and the intuition that these machines cannot understand evaporates.


Randy Mayes: The Euthyphro dilemma

The original form of this dilemma concerns piety, but in today’s ethics classes the word “good” is usually substituted for “pious,” and it is reformulated for monotheistic sensibilities: Is something good because God commands it, or does God command it because it is good?

If we choose the first horn, we must allow that it would be good to eat our children, assuming God willed us to do so. Choose the second and we admit that goodness is a standard to which God himself defers.

Almost always the lesson drawn is that morality is (a) objective and (b) something whose nature we may discover through rational inquiry, regardless of our religious beliefs. Which is just what traditional moral philosophy assumes and does. Hurrah!

It’s a lovely piece of sophistry.

Socrates has created a false dilemma that also begs the question against his opponent. Euthyphro has complied with Socrates' request for a definition. A definition of P is a set of necessary and sufficient conditions, Q, for P. If correct, it is neither the case that P because Q nor that Q because P. This question only makes sense if P and Q are simply presumed to be different.
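To put the point about definitions in symbols (my notation, not Mayes's): defining P by necessary and sufficient conditions Q amounts to the biconditional

```latex
\forall x\,\bigl(P(x) \leftrightarrow Q(x)\bigr)
```

and a biconditional is symmetric: it licenses neither ‘P because Q’ nor ‘Q because P’, so asking which side explains the other already presupposes that P and Q name different things.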

The truth: it is fine to define the good as what a morally perfect being commands (or wills). However, it provides no insight into the content of such commands. It provides no reason to believe that such a being exists, or that we could recognize it or know its will if it did.


Tom Pyne: Determinism

Determinism is the source of much mischief in philosophy.

Thus Determinism:

For every event there is a cause such that, given that cause, no other event could have occurred.

The mischief stems from its early modern formulation. Peter van Inwagen’s is representative:
  • P0 = a proposition giving a complete state description of the universe at any time in the past.
  • L = all the laws of nature.
  • p = a proposition stating some event that occurs (Electron e’s passing through the left slit; Pyne’s walking home by way of D Street on November 6, 2017)
  • N = the operator ‘it is a natural necessity that’
Determinism is:
If P0 and L, then Np
So, for example: it is impossible for e not to pass through the left slit, and it is impossible for Pyne to go home by F Street instead.
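In symbols (keeping the notation the author defines above), the formulation Pyne is targeting is:

```latex
(P_0 \land L) \rightarrow Np
```

so that, in the two examples, N applies to e's passing through the left slit and to Pyne's walking home by D Street rather than by F Street.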

Now Determinism is true.

It’s this formulation that’s wrong.

Notice that it appeals to laws of nature, but nowhere to causes.

But are there laws of nature? Not literally. Scientific ‘laws’ are (heuristically valuable) idealizations of the causal powers of objects.

This consideration enables us to avoid the natural necessity of p. Which is just as well, since we are committed to denying it in the electron/slit case by quantum mechanics and in my case by everyday experience.

I have the causal power sufficient to go home by D Street and the causal power sufficient to go home by F Street. Determinism, properly understood, won’t rule this out. Whichever way I go, it’s not a miracle.


Garret Merriam: The emotion/reason distinction

The distinction between cold, calculating reason and hot-blooded emotion runs deep in Western thought. It has fueled heated debates in moral psychology and philosophy of mind. It strikes us as obvious that the faculty we engage when doing math is fundamentally different from the one we engage when reading love poetry. So obvious, in fact, that we assume there’s no good reason to doubt the distinction.

There’s good reason to doubt the distinction.

For starters, the distinction is more prominent in Western thought than in Eastern. In classical Chinese philosophy the word xin refers both to the physical heart and seat of the emotions and to the locus of perception, understanding and reason. The closest approximate translation in English is ‘heart-mind.’ When conceptual categories blur across geographical boundaries, that suggests the distinction might be a cultural artifact rather than a fundamental one.

Functional neuroanatomy also casts doubt. While it’s common to refer to (so-called) emotional vs. rational ‘centers’ of the brain, closer examination shows our brains are not so neatly parsed. For example, the amygdala (traditionally an emotional center) is active in certain ‘cognitive’ tasks, such as long-term memory consolidation, while the prefrontal cortex (traditionally the rational center) is active in more ‘emotional’ tasks, such as processing fear.

The line between thinking and feeling doesn’t cut cleanly across cultures or brains. Perhaps this is because, rather than two fundamentally different faculties, there is instead a vague set of overlapping clusters of faculties that, upon reflection, resist a simple dichotomous classification.


Kyle Swan: Property owning democracy

John Rawls objected to wealth inequalities on the grounds that they lead to political inequalities. The wealthy will use their excess wealth to influence political processes and game the system in their favor. Economists call this regulatory capture. To eliminate these political inequalities, eliminate economic inequalities.

But when we task the state with eliminating economic inequalities, we give it a lot of discretionary power to regulate our economic lives. This makes influence over political processes worth more to those who would game the system in their favor, giving them more incentive to capture it. The policies could backfire.

Rawlsians tend to invoke ideal theory here. They’re describing a regime where efforts to realize economic and political equality are implemented by cooperative actors who are in favorable conditions for compliance, so they can “abstract from...the political, economic, and social elements that determine effectiveness.” Policies don’t backfire in magical ideal-theory world.

Rawls can use idealizing assumptions if he wants, but he shouldn’t be so selective about it. For why do we need the state interventions associated with “liberal socialism” or a “property-owning democracy” in the first place? Well, remember, because the rich in “laissez-faire capitalism” and “welfare-state capitalism” use their wealth to game the system.

But this means that the idealizing assumptions have dropped out of his consideration of the disfavored regime-types. Otherwise, the wealthy there would be riding their unicorns to visit all the affordable housing they’ve built (or whatever), not trying to illicitly game the system in their favor.


Russell DiSilvestro: Intuition and inevitability 

“It seems to me,”
The man said slowly,
“Your intuition’s no good.”

He quickly added,
“Nor mine, nor anyone’s,”
As if that helped things.

For if no one’s intuitions are any good
Why should I
Or anyone
Care
How stuff seems
To you?

Perhaps his point was just that
P
And not that
P because it seems to me that P.

But then why say it?

Perhaps he was just
Being conventional
And pragmatic
And friendly.

But then why believe him?

After all
Nothing is more unbelievable than
P
At least the way he said it.

At least
That’s how
It seems
To me.


David Corner: Reason is the slave of the passions

In the Treatise, Book II, Part III, Sec III, Hume argues that
reason alone can never be a motive to any action of the will; and secondly, that it can never oppose passion in the direction of the will.
I will focus on the second claim.

As one of my seminar students observed this semester, Hume qualifies this claim by providing an exception: sometimes our passions are founded on false suppositions. An example: I suppose this glass to contain beer, and so I desire to drink it. When I judge that the glass actually contains turpentine, this desire vanishes.

My desire to drink the contents of the glass is what T. M. Scanlon refers to as a “judgment-sensitive attitude.” Its judgment-sensitivity is like that of a belief; I revise my belief that the glass is filled with beer when I am given reasons for thinking that it is filled with turpentine. Indeed, my desire to drink the contents of the glass seems entirely dependent on a factual judgment about its contents. Nearly all of what Hume calls “passions” are actually judgment-sensitive attitudes. The exceptions Hume cites would appear to be the rule.

Hume fails to see that the suppositions that provide the basis for most of our passions are really judgments, and that these judgments motivate us by providing reasons for acting; i.e., my motivation for drinking this liquid depends on reasons for thinking it is beer. The distinction between reason and passion may be more tenuous than Hume realizes.