Monday, March 26, 2018

A philosopher you recently discovered...

This week, faculty members write about a philosopher they recently discovered, and what they like about them.



One author who has come to my attention over and over again recently is Deborah Rhode, Ernest W. McFarland Professor of Law at Stanford University. She has written extensively on legal ethics, gender and the law, and other areas of law and ethics. I cited her articles critiquing the teaching of ethics in law schools in my article on service learning. I required students to read a chapter of her anthology, Ethics in Practice: Lawyers’ Roles, Responsibilities, and Regulation, in my professional ethics course. She co-edited, along with one of my former advisors, the hornbook on legal ethics (i.e., the textbook used to teach ethics in law schools). Most recently, I came across her book on discrimination on the basis of appearance, The Beauty Bias, as we were looking for guest speakers for this year's Nammour Symposium (we asked; she could not make it, but wished us all the best on this important topic). She is a lawyer by training, but students who are interested in legal ethics would do well to become familiar with her work (start with her article, Ethics in Practice, in the anthology mentioned above). 
Chong Choe-Smith


    I wanted to draw attention to a non-philosopher, the economist Peter Leeson, who’s written what amounts to a fun introduction to rational choice, decision, and game theory. His recent book, WTF?!, is, as its subtitle suggests, an economic tour of the weird. Some of the most outlandish practices in history have been solutions people hit upon for solving pressing social problems of their time and place.
    Leeson pushes the rational actor model to its limits, showing how seemingly senseless and/or barbaric practices -- like burning witches, selling wives, and holding trials by combat or ordeal and of animals -- were (and in some cases, are) sensibly grounded in expected benefits, given the set of prevailing beliefs and values and other constraints.
    But are old-timey superstitions relevant to us, the enlightened, in our situations today? Read this short piece by Leeson on polygraph tests and think about how trial by the ordeal of walking on red-hot ploughshares could have provided a similar sorting mechanism.
Kyle Swan


    Anna Marmodoro is an Italian metaphysician teaching at Oxford. I first discovered her while researching Anaxagoras’ theory of ‘homeomerous seeds,’ which seemed to me a gunk theory very advanced for its time (c. 460 BC). Marmodoro’s book on Anaxagoras, Everything in Everything, confirmed this for me. This led me to other work of hers, in which her powers account of causality leads her to a neo-Aristotelian ‘hylomorphic’ theory of objects.
    Since I had come to the conclusion that Galilean elementalism as an explanatory strategy, having produced the Scientific Revolution, has now run its course, much in the same way the medieval Aristotelian synthesis ran its course in the 16th century, I was interested.
    Marmodoro’s hylomorphism is ‘holistic’ in that matter and form are not parts or constituents of the substance. If the substance is composite, such as an organism, its organs cease to have an independent existence but are ontologically subsumed by the whole. So substances like organisms are the fundamental reality, not the elements composing them.
    This constitutes an attractive solution to the metaphysical ‘Problem of Composition’: How are the elements in the Cap’n Crunch my grandson Matthew eats for breakfast related to Matthew? Are grandchildren emergent entities out of what they eat, thus rendering them either epiphenomenal or redundant? Or is ‘Matthew’ merely a heuristic concept, a way of thinking about stuff temporarily arranged a certain way? Either way, Matthew becomes a dubious or derivative entity.
    I’m a realist about grandchildren; Marmodoro’s theory is one way of making sense of that.

Bibliography: 
Everything in Everything, OUP 2017
“Aristotle’s Hylomorphism, Without Reconditioning,” Philosophical Inquiry 36:5-22
Thomas Pyne

    David Hilbert claimed that mathematics was sloppy and should be made more rigorous by formalizing its theories in, say, predicate logic, and then showing both that all of a theory's truths are provable and that anything provable is true. Gödel's Incompleteness Theorem of 1931 upset Hilbert's program because it established that any consistent formal arithmetic of this kind must have unprovable truths. If we were to add such an arithmetic's unprovable truths as new axioms, the new arithmetic would have still other unprovable truths. In 1950, Raphael Robinson, a Berkeley mathematician, identified a remarkably minimal set of assumptions sufficient for Gödel's proof to go through. The elegant result is now called Robinson Arithmetic, which I am teaching this semester in Phil. 160.
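    For readers curious what such a minimal arithmetic looks like, here is one standard axiomatization of Robinson Arithmetic (formulations differ slightly from textbook to textbook), with S the successor function and the variables understood as universally quantified:

```latex
% Robinson Arithmetic (Q), one standard axiomatization
\begin{align*}
&\text{(Q1)}\quad Sx \neq 0\\
&\text{(Q2)}\quad Sx = Sy \rightarrow x = y\\
&\text{(Q3)}\quad x \neq 0 \rightarrow \exists y\,(x = Sy)\\
&\text{(Q4)}\quad x + 0 = x\\
&\text{(Q5)}\quad x + Sy = S(x + y)\\
&\text{(Q6)}\quad x \cdot 0 = 0\\
&\text{(Q7)}\quad x \cdot Sy = (x \cdot y) + x
\end{align*}
```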
    A second, surprising idea, due to Abraham Robinson, a Yale mathematician and philosopher of mathematics, is that any proposed formal theory of the real numbers will have unintended models containing (1) infinite numbers bigger than any standard real number and (2) Leibniz-like infinitesimal numbers. This is a surprising limitation on our ability to formalize any science that uses math.
    Abraham Robinson's further idea was to create a calculus true to the spirit of Leibniz's idea that speed is an infinitesimal change in distance divided by an infinitesimal change in time. This calculus is easier to learn than the kind taught at Sac State with epsilons and deltas. But much as the metric system has been unable to replace the entrenched English system of units in the U.S., American universities resist wholesale change to Robinson's nonstandard calculus even though they admit it is the more intuitive calculus.
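    As a quick illustration of the Leibnizian style of reasoning that nonstandard calculus makes rigorous (an illustrative computation, not a quotation from Robinson), take f(x) = x², let dx be an infinitesimal, and discard the leftover infinitesimal at the end by taking the standard part st:

```latex
\frac{f(x+dx) - f(x)}{dx}
  = \frac{(x+dx)^2 - x^2}{dx}
  = \frac{2x\,dx + dx^2}{dx}
  = 2x + dx,
\qquad
f'(x) = \operatorname{st}(2x + dx) = 2x.
```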
Brad Dowden



    I’ve become acquainted with Tamar Gendler’s work fairly recently and I’m attracted to her notion of alief, which she introduces for the purpose of explaining belief-discordant behavior.
    Briefly, the question here is whether people can really believe something when their behavior clearly suggests otherwise. Can I really believe that airplanes are safer than cars when I am terrified of flying but not of driving? Can I really believe that men and women are equally intelligent when I routinely defer to the opinions of men? Can I really believe that turd-shaped chocolates are perfectly tasty when I am disgusted by the idea of eating one?
    Behaviorists say no. To believe something just is to act as if it is true. Behaviorists happily allow that people are often poor judges of what they believe. In cases like these we unwittingly report what we think we ought to believe rather than what we really do believe.
    But Gendler subscribes to a more traditional intellectualist view according to which what we believe is what we sincerely reflectively endorse; what we will tell other people when we want them to know what is true.
    Gendler argues that if this is the case then we need another notion to explain belief discordant behavior. What we believe is one thing, what we alieve is another. Alief is a concept that fits nicely within Kahneman’s concept of System 1 thinking: intuitive, associative, rapid and effortless forms of inference that lie largely beyond our voluntary control. 

Randolph Mayes


    Dallas Willard (1935-2013), a philosopher who taught at the University of Southern California, is someone I only met once, but read and listen to quite a bit. An idea he had, and I like, is that one’s personal philosophy will inevitably be embodied in one’s actual life. So one way for each of us to evaluate a personal philosophy is to see how it seems to work out for individuals who believe it and embody it. Willard’s own life is one example of this, a nice glimpse of which can be seen from a page his family and friends have developed to honor him, especially its “about” section and a “tribute” by his USC philosophy colleague Scott Soames.
Russell DiSilvestro


    I was recently re-introduced to philosopher Eleonore Stump through this talk. I found it so interesting I decided to purchase her book, Wandering in Darkness: Narrative and the Problem of Suffering. I came across Stump some 20 years ago when she presented at a lecture series on faith and the problem of evil. These lectures, interestingly, were the first of her attempts to articulate what was later to become the book I am reading now.
    I am only on page 27, and there is no guarantee that I will finish it. But I have really enjoyed what I have read so far. Stump writes that she prefers to frame the problem of evil as a problem of suffering, as it is suffering rather than evil or pain that can undermine the desires of our hearts.
Also, I really like this quote:

At its best, the style of philosophy practiced by analytic philosophy can be very good even at large and important problems… But left to itself, because it values intricate technically expert argument, the analytic approach has a tendency to focus more and more on less and less; and so, at its worst, it can become plodding, pedestrian, sterile, and inadequate in its task… (p. 24-25)
    I hope I make it to the end of the book. I think I can learn a lot by reading it. In the meantime, I’ve got to get through these midterm papers, some of which seem to be undermining the desires of my heart.
Dorcas Chung

I would recommend Porphyry of Tyre, who was, like Pythagoras, an advocate of vegetarianism on spiritual and ethical grounds. These two philosophers are perhaps the most famous vegetarians of classical antiquity. Porphyry wrote On Abstinence from Animal Food (Περὶ ἀποχῆς ἐμψύχων; De Abstinentia ab Esu Animalium), arguing against the consumption of animals, and he is cited with approval in vegetarian literature up to the present day.
Clovis Karam
    

Cambridge was home to some of the most towering geniuses of the 20th century: Ludwig Wittgenstein, Bertrand Russell, John Maynard Keynes, G.E. Moore. But one name gets far less attention than it deserves: Frank Ramsey.
    A genuine polymath, Ramsey accomplished more by his death at the tragically young age of 26 than most intellectuals do in lives three times as long. He began learning German at 18, and by 19 produced the first English translation of Wittgenstein's Tractatus. Shortly after turning 20 he traveled to Austria and in the space of two weeks convinced Wittgenstein that there were fundamental flaws in his argument, prompting Wittgenstein to later return to Cambridge to set things right.
    The year after that he produced a new branch of mathematics (today called "Ramsey Theory"), which concerns the conditions under which order must appear in large mathematical structures. In the next two years he wrote two papers in economics that transformed thinking on taxation and savings, founding a new branch of the discipline now known as 'optimal accumulation.'
    His philosophical work was no less influential. His theory of truth (known as the 'Redundancy Theory') dissolved several problems that had haunted philosophers since Plato. His analysis of theoretical vs. observable terms in philosophy of science inspired both Rudolf Carnap and David Lewis. His work on subjective probability laid a cornerstone for the later development of decision theory and game theory.
    Try to imagine how all of these disciplines would have been transformed if Ramsey had not died shortly before his 27th birthday.
Garret Merriam

    I recently got to know the work of Diane Proudfoot. She is a professor of philosophy at the University of Canterbury, in New Zealand. She works in several areas, including the history and philosophy of computer science, Turing, and philosophy of religion. I was looking for works in the philosophy of artificial intelligence and found her chapter “Software Immortals: Science or Faith?”, which is part of the book Singularity Hypotheses: A Scientific and Philosophical Assessment. We are witnessing a growing body of researchers talking about unsavory futures brought about by our accelerating technological progress. Some, like Oxford University philosopher Nick Bostrom, warn that human extinction is one of them. Futurist Ray Kurzweil predicts that digital enhancements will replace the messy mortal flesh in a few years. Since the robot-takes-control scenario has long been a favorite of science fiction, it’s difficult to separate speculation from scientifically justified hypotheses. Proudfoot’s chapter was a refreshing and surprising read for me, as a newcomer to the topic of digital minds and technological promises. She compares the promises of AI with those of religion. Both are supernaturalist proposals based on faith, she argues. And techno-supernaturalism, as she calls it, does no better than the old religions, especially as a way of dealing with humans’ fear of death. Techno-supernaturalism, she writes, “can be seen as a new-and-improved therapy for death anxiety, based on AI and neuroscience rather than on revelation”.
Saray Ayala-López

Sunday, March 11, 2018

Why I love money

I’ve been a professional philosopher for a while now, and one thing I have noticed is that few of my kind think much of money. In one sense this is to be expected and, perhaps, admired. Philosophy has always been associated with a concern for something larger and more meaningful than the accumulation of material wealth.

But the sense I have in mind is neither expected nor admirable. What I mean is that we seem unduly unfascinated by money. It's as if our distaste for a life lived in pursuit of money inhibits our ability to appreciate its philosophical significance: what it is, how it works, what it suggests about human nature and society. This is unfortunate because money may be the most powerful invention, the most intriguing entity, and the greatest force for human cooperation this side of God. Every philosopher should try to understand why.

Neither Plato nor Aristotle was a huge admirer of money, but they thought about it enough to know that it emerged as a way of overcoming the limitations of barter. They knew the obvious limitation: in a barter economy Socrates may desire a massage from Epione, but Epione may desire no instruction in philosophy from Socrates. So, to get worked on, Socrates must fetch Alcibiades, who desires Socrates' services and who will gladly send several jugs of wine to Epione in return.

Another impediment, perhaps more dimly appreciated, is that even if Epione were desirous, she wouldn’t know how much philosophy to charge. Purveyors in a barter economy have to consider the exchange rate between their goods and every other thing they may be willing to accept in return.
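To put an illustrative number on this bookkeeping burden (my own back-of-the-envelope arithmetic, not the author's): with n distinct goods, a pure barter economy requires a separate exchange rate for every pair of goods, whereas a money economy needs only one price per good:

```latex
\underbrace{\binom{n}{2} = \frac{n(n-1)}{2}}_{\text{barter exchange rates}}
\quad\text{vs.}\quad
\underbrace{\,n\,}_{\text{money prices}},
\qquad\text{e.g. } n = 100:\ \frac{100 \cdot 99}{2} = 4950 \ \text{vs.}\ 100.
```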

Money solves both of these problems. First, money can be used to represent the value of every other good. In a money economy, Epione doesn’t have to compute the value of a massage in units of philosophy. She just needs to state her fee. Second, everyone accepts money as payment. As Yuval Harari points out, this is because “Money is a universal medium … that enables people to convert almost everything into almost anything else.”

If money had never been created, human societies would probably have remained small, and commerce between them cautious, limited and infrequent. Money economies facilitated routine transactions, and hence growing levels of trust between complete strangers. This enabled societies to become vastly larger, more complex, and capable of previously inconceivable levels of cooperation. In so doing, money replicates itself, causing ever more wealth to be created. All this glorious complexity occurred because money vastly simplified the computational tasks individuals needed to perform to exchange goods and services.

Granted that money does all these things, the question is how. What makes us accept money as payment in the first place?

A simple answer would be that the stuff of which money is composed has independent utility. This is the correct answer for some forms of “commodity money” used in primitive money economies. Wheat, tea, candy, cigarettes, and cacao beans have all been used as money specifically because they have independent value to humans. The same explanation holds for gold coins. As Cortés explained to Moctezuma, Spaniards suffer from a “certain disease of the heart” that only gold can cure.

But advanced economies don’t use commodity money, they use “fiat money.” To understand this, consider the scraps of paper we call dollar bills. It used to make sense to accept these as payment. They were essentially just government-issued IOU’s. Theoretically all of the currency in circulation was redeemable for gold.

Paper bills “store the value” of an existing and universally desired commodity, making it possible to exchange the commodity without having to transfer it physically. Of course, a system like this works only because those who participate in it believe the bills will be honored. (When the government bounces notes, all hell breaks loose.) So this variation on a system of commodity money both requires and fosters even greater levels of trust than before.

A system of fiat money emerges when this cord is cut; when paper, coins, and (now) bits of electronic data are no longer tethered to an existing commodity. This occurred in virtually all major economies during the 20th century. Governments still maintain reserves of gold, but it is officially just another good, not something that underwrites the value of their currency.

Almost all economists believe this was a positive development (though some politicians do not.) It seems to be basically working. But why should it? In the past, money was clearly tied to a material reality. Now it is as if money exists only insofar as we believe that it does. Again Harari:
Money is [fundamentally] a system of mutual trust, and not just any system of mutual trust: money is the most universal and most efficient system of mutual trust ever devised. 
Money, then, is one of our most salient examples of an intersubjective reality: a set of entities, structures and processes whose existence and causal powers are palpable, but which would vanish into thin air in the absence of mutual trust and belief. Nations, cities, constitutions, corporations, schools, legal systems, rights, obligations, roles and privileges are all putative examples of such entities.

Intersubjective reality, first described by Kant, is to be distinguished from subjective reality (isolated in a single mind) and objective reality (mind independent). It is a philosophically intriguing category partly because it is difficult to decide whether its members (a) really exist in virtue of being believed to exist or (b) really do not exist, even though the mutual illusion that they do is useful in producing cooperative behavior.

The second interpretation seems clearly appropriate for some kinds of entities. Gods, for example, are imaginary entities whose value is best explained in this way. So are morals, at least to the extent that they are represented as the deliverances of gods.

But money seems distinctly different. It just seems crazy to deny the existence of something that makes the entire world go round.

G. Randolph Mayes
Department of Philosophy
Sacramento State

Tuesday, March 6, 2018

Learning Moral Rules


While evolutionary psychology has led to a proliferation of (often outlandish and essentializing) claims about innate human traits and tendencies, the view that human morality is innate has a long and reputable history. Indeed, broadly evolutionary accounts of morality go back to Darwin and his contemporaries. Views that posit innate cognitive mechanisms specific to the domain of morality (viz., moral nativism) are of more recent vintage. The most prominent contemporary defenders of moral nativism adopt a perspective called the “linguistic analogy” (LA), which uses concepts from the Chomskian program of generative linguistics to frame issues in the study of moral cognition.[1] Here, I present one of LA’s key data points, and propose an alternative, non-nativist explanation of it in terms of learning. 

The data point on which I’ll focus concerns the proposed explanation for certain observed patterns in people’s moral judgments, including in response to trolley cases. In the sidetrack case (fig. 1), most people judge that it would be permissible for a bystander to save five people by pulling a switch that would divert the trolley onto a sidetrack where one person would be struck and killed. However, in the footbridge case (fig. 2), most people judge that it would not be permissible for a bystander to save five people on the track by pushing someone bigger than himself off the bridge into the path of the trolley to stop it.





The results from cross-cultural studies of the trolley problems and similar dilemmas suggest that subjects’ judgments are sensitive to principled distinctions like the doctrine of double effect, where harms caused as a means to a good outcome are judged morally worse than equivalent harms that are mere side effects of an action aimed at bringing about a good outcome.[2]

To explain the acquisition of these implicit rules, LA invokes an argument from the poverty of moral stimulus. For example, Mikhail argues that to judge in accordance with the doctrine of double effect involves tracking complex properties like ends, means, side effects, and prima facie wrongs such as battery. It’s implausible that subjects’ sensitivity to these abstract properties is gained through instruction or learning. Rather, a more plausible explanation is that humans are endowed with an innate moral faculty that enables the acquisition of a moral grammar (which includes the set of these rules).[3]

I believe that other research from language acquisition and the cognitive sciences more broadly points to the availability of a different explanation of how these implicit rules could be acquired, via learning mechanisms not specific to the moral domain. Evidence suggests that children employ powerful probabilistic learning mechanisms early in their development.[4] With these mechanisms, children are able to form generalizations efficiently, on the basis of what might otherwise appear to be sparse data.

Consider the following example from a study of word learning: 3- to 4-year-old subjects who heard a novel label applied to a single Dalmatian extended the label to dogs in general.[5] When the label was applied to three Dalmatians, subjects extended it to Dalmatians only. In the latter case, though the data are consistent with both candidate word meanings (dog, Dalmatian), the probability of observing three Dalmatians is higher on the narrower hypothesis.
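To make the inference concrete, here is a minimal sketch of the "size principle" at work, written in Python. The extension sizes and the prior favoring the basic-level category are invented purely for illustration; the actual model in Xu and Tenenbaum's paper is considerably richer.

```python
# Toy Bayesian "size principle": the likelihood of n independently sampled
# examples under hypothesis h is (1 / |extension of h|) ** n.
# Extension sizes and priors below are invented for illustration only.

hypotheses = {
    "dog":       {"size": 100, "prior": 0.95},  # broad, basic-level hypothesis
    "dalmatian": {"size": 10,  "prior": 0.05},  # narrow hypothesis
}

def posterior(n_examples: int) -> dict:
    """Posterior over hypotheses after observing n_examples Dalmatians."""
    unnormalized = {
        name: h["prior"] * (1.0 / h["size"]) ** n_examples
        for name, h in hypotheses.items()
    }
    total = sum(unnormalized.values())
    return {name: round(p / total, 3) for name, p in unnormalized.items()}

print(posterior(1))  # ~{'dog': 0.655, 'dalmatian': 0.345}: one example, broad label favored
print(posterior(3))  # ~{'dog': 0.019, 'dalmatian': 0.981}: three examples, narrow label wins
```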

I propose that a similar process of inference could account for the acquisition of implicit moral rules. There may be sufficient information contained in the stimuli to which individuals are typically exposed in the course of their early development – including the reasoning and response patterns of adults and peers in their environment – to account for their ability to make such moral distinctions. Consider the act/omission distinction. Cushman et al. found that subjects judge in accordance with what they call the ‘action principle’, according to which harm caused by action is judged morally worse than equivalent harm caused by omission.[6] Children observe this distinction in action. A child may be chided more harshly for upsetting a peer by taking a cookie away from her than for upsetting a peer by failing to share his own cookies, for example. With the kind of probabilistic learning under consideration, it may take surprisingly few such observations for children to generalize to a more abstract form of this distinction. Observing the distinction at play in a few different types of scenarios may be sufficient for a learner to generalize, going beyond tracking the distinction in just the particular cases observed to inferring a general model that could have given rise to the data they have encountered.


Of course, further investigation is needed to comparatively assess these two proposals. I’ll end by noting that the debate over moral nativism has both theoretical and practical implications. If the non-nativist account is right, this points to a view of our capacity for moral judgment as more malleable and amenable to intervention and improvement than the nativist account suggests. On the other hand, some (though not all) take the nativist account, if correct, to invite a skeptical view about morality.


Theresa Lopez
Department of Philosophy
University of Maryland

[1]Dwyer, S., Huebner, B., and Hauser, M. 2010: The linguistic analogy: motivations, results and speculations. Topics in Cognitive Science, 2, 486–510.
[2] Hauser, M., Young, L., and Cushman, F. 2008: Reviving Rawls’ linguistic analogy. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 2, The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 107-144.
[3] Mikhail, J. 2011: Elements of Moral Cognition. Cambridge: Cambridge University Press.
[4] Xu, F., and Griffiths, T.L. 2011: Probabilistic models of cognitive development: Towards a rational constructivist approach to the study of learning and development. Cognition, 120, 299-301; Perfors, A., Tenenbaum, J. and Regier, T. 2011: The learnability of abstract syntactic principles. Cognition, 118, 306-338.
[5] Xu, F., and Tenenbaum, J. B. 2007: Word learning as Bayesian inference. Psychological Review, 114, 245–272.
[6] Cushman, F., Young, L. and Hauser, M. D. 2006: The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychological Science, 17, 1082-1089.

Sunday, March 4, 2018

Should the Washington Redskins Change Their Name?


A hot ethics topic in NFL football is whether the Washington Redskins should change their name, in light of numerous requests to do so from groups such as the National Congress of American Indians and the tribal council of the Cherokee Nation of Oklahoma. Such groups consider ‘redskins’ to be a racial slur.

Records indicate that the first use of ‘redskins’ came in the mid-18th century, when Native Americans (NAs) referred to themselves as ‘redskins’ in response to the frequent use of skin-color identification by colonials, who called themselves ‘white’ and their slaves ‘black.’ In 1863, an article in a Minnesota newspaper used the term in a pejorative sense: “The State reward for dead Indians has been increased to $200 for every red-skin sent to Purgatory. This sum is more than the dead bodies of all the Indians east of the Red River are worth.” In 1898, Webster’s dictionary defined ‘redskin’ as “often contemptuous.” In 1933, the football team’s name was changed to ‘Redskins.’ Similar to the Oxford Dictionary, Dictionary.com writes: “In the late 19th and early 20th centuries…use of the term redskin was associated with attitudes of contempt and condescension. By the 1960s, redskin had declined in use; because of heightened cultural sensitivities, it was perceived as offensive.”

The main argument used to support the use of this name relies on opinion polls. The 2004 Annenberg poll and the 2016 Washington Post poll found that roughly 90% of NAs do not perceive the term to be offensive. These results have been used by the team owner, Dan Snyder, and the NFL commissioner to defend the name.

However, one problem is with the questions posed. For instance, similar to the Annenberg poll, the Washington Post asked, “As a Native American, do you find that name offensive, or doesn’t it bother you?” Notice that it still could be that NAs understand the name to be morally wrong or racist, yet do not find it offensive or bothersome. The word ‘offensive’ doesn’t necessarily mean morally offensive. Perhaps NAs maintain a sticks-and-stones-can-break-my-bones-but-words-will-never-harm-me mentality: they are not bothered or, in other words, “offended” by the name, since words will never harm them, but they do find it morally reprehensible. The survey question needs to use terms like ‘morally offensive’ or ‘racist’ when asking about subjects’ attitudes toward the name. Without this, there are plausible alternate interpretations of the results, and any strong conclusion drawn from the study will be unwarranted.

Also, when uncovering someone’s moral viewpoint, it is important that subjects have all the facts relevant to the case if we are to get their real judgment. This is standard practice in ethics, where one should have the relevant facts about a situation before making a decision on it. For facts, as for a juror in a trial, can change one’s verdict. As noted above, ‘redskins’ is a dated term that has not been in common use since the 1960s because of its racist connotation. It could be that most NAs today are not familiar with its history. A more accurate survey attempting to discover this population’s real moral judgment on the use of the name should first provide an accurate and comprehensive history of the word, including that it was used as a racist term to promote genocide against NAs, as indicated above. Once one makes sure that subjects know the relevant history, participants should then answer the question of whether they find the use of this name to be morally wrong. As this has not been done, the conclusions of the above studies are not justified.

Additionally, a word that is rooted in hatred and genocide should not be used so trivially as the name of a sports team, regardless of what most NAs believe on the matter. Some acts, like genocide, are so utterly vile that the pejorative terms associated with them at the time, like ‘redskins,’ should not be used today in the same country as the name of a sports team, proudly marked on fan gear and uttered in cheers during games of entertainment. The same would hold if a German soccer team wanted to adopt the swastika as its symbol 100 years from now, even if most German Jewish people of that future time were morally OK with it. It still would be wrong and should not be done.

Finally, the historical context of the intention behind giving a team such a name matters. Gilbert claims the team was so named in order to honor NAs in general and some NAs associated with the team. However, in a 1933 Associated Press interview, the then team owner said he changed the name simply to avoid using the city’s baseball team’s name. Given that the name was widely understood to be a derogatory term at the time, as noted above, I take it that an underlying intention in using the name, as with most instances when a team or university adopts a NA name, is to draw on a negative stereotype of NAs as something like savages: wild, fearless, and warrior-like. They are cast as savage in the way the bears, lions, and other animals that occupy the names of other teams are. The intention is to use a racial stereotype. Whether one can foresee it or not, such a stereotype is harmful to NAs and can limit what they are perceived as being capable of, like being kind and intelligent. Hence, the name should be changed. Just as an intention to do good that unintentionally leads to bad consequences can at times be enough to absolve blame, so too, in dubbing a team, the intention to use a racial stereotype that is in fact racist, whether one realizes it or not, can be all that is needed to establish that the name should be changed.



John J. Park

Philosophy Department

Oakland University