Monday, May 9, 2016

Donald Trump and Republican lemonade

Neither do people pour new wine into old wineskins. If they do, the skins will burst; the wine will run out and the wineskins will be ruined.  ~Matthew 9:17

If you think my title and my use of this trope from Matthew’s gospel (with parallels in Mark and Luke) are my way of setting up a pro-Trump comparison—like “Trump is to today’s Republican party what Jesus was to ancient Israelite religion”—well, think again.

My title and this passage are meant to set up two things: one satirical, one serious.

Satire first. If you’ve been following the American news the past week, you may have noticed that the latest twist is that Mr. T is the “presumptive Republican nominee” for president, thanks to the good men and women back home in Indiana. (Indiana?! It’s enough to make a grown man cry. But that’s a subject for another post.)

As a result, many Republicans (though certainly not all)—even those who strongly opposed Mr. Thrasymachus vocally and recently—are trying to look for silver linings, put the best face on things, and so on (see exhibit P).

I call this game “Republican Lemonade.”

To echo, of course, that infamous urban rule of thumb, “when life gives you lemons, make lemonade.”

Now Republican Lemonade is a game that many Republicans (present company included) have played, to one degree or another, in past elections when moving from the primary to the general.

But this year, “Republican Lemonade” also has to overcome a sentiment which, in yet another strange example of art imitating life, is reflected in some tart lyrics from one of Beyonce’s newest songs (“Sorry”) on her newest album (yes, “Lemonade”)—

"Middle fingers up, put them hands high/ “Wave it in his face, tell him, boy, bye.”

I know, I know, she’s singing about a cheating lover. But doesn’t she capture well the sentiment many “conservative” “Republican” “voters”—many of them “evangelical” “Christians”—seem to have towards “their” elected representatives this year? (Explaining those scare quotes is a topic of several other posts. Sorry, I’m Not Sorry.)

So, one dilemma faced by the makers of Republican Lemonade this year is: why shouldn’t we take precisely the same Beyonce-like attitude to Trump and his followers that he and many of them took towards us?

One answer, of course, is a version of “if we don’t hang together, we’ll all hang separately.” Gotta beat back the worse threat in the general. And so on.

So, then, putting all this together, and coming back to the curious passage in Matthew that we began with, I offer this:

How to Make Republican Lemonade (2016 version):
Step 1. Toss a yuuuge rotten orange, uncut, into an old blender. 
Step 2. When the blender breaks and becomes a smoking heap by trying to cut through the orange skin, pour the contents into a cup and call it “lemonade.” 
Step 3. Advertise “lemonade” as The Lesser of Two Evils, or Better Than The Alternative, or something equally inspiring.
That’s the relevance of Jesus’ words here. The Grand Old Party is like the old blender / old wineskins. Mr. Tropicana is like the rotten orange / new wine. It’s hard not to see what’s happening as a big huge mess in which one thing busts up another and leaves both of them far worse off in the long run.

OK, that was somewhat cathartic. But is there a serious philosophical question, or point, here?

Well, I think so.

Question: is it not both difficult and ultimately valuable for any politically inclined person to balance personal consistency over time with personal integrity at a given time?

I think the answer is “yes.”

I think you can see this no matter what your political party affiliation is (or even if you deliberately have none at all). But I leave it up to you to supply the satire and the seriousness you find most appropriate to your own station.

It’s difficult because there are both so many so-called “knowns”—such as your preferences, your moral and religious beliefs, your beliefs about the history of different people and different historical situations—and so many so-called “unknowns”—such as how other voters will act, how much the future behavior of candidates will resemble their past, and what circumstances tomorrow will bring.

But it’s valuable because each of us has to live with ourselves. Not just in the collective sense familiar to politics—“we have to live together”—but in the individual sense familiar to each soul—“I have to live with myself, look myself in the mirror, etc.”

Here’s my version of it for now: Mr. Trump and I have some things in common, but I’m still not committing to support or vote for him yet.

You’ve probably heard of “six degrees of separation” linking you and someone else (if not, Wikipedia will help). But since “separation” and “solidarity” are sometimes seen as opposites, I offer, at the risk of damning with faint praise…

My Six Degrees of Solidarity With Mr. Trump:
1. Member of the human family.
2. American citizen.
3. Professing Republican.
4. Professing Christian.
5. White.
6. Male.
1 is a low bar, but hey—it’s important to affirm that he is in this sense my “brother” (even if he may want to be Big Brother).

2 counts too—while it’s partly an accident (of birth), it’s also partly a continual choice (just ask Facebook co-founder Eduardo Saverin).

3 and 4 include the word “professing” in order to cut through a forest of political and theological timber in one swoop. (While my scare quotes above revealed my sympathy for this timber, the move I’m making here doesn’t require dealing with it.)

5 and 6? There are some who would like to make one or both of these into a reason to vote for or against someone. I, on the other hand, do not think either of them is either.

Bottom line? Even solidarity does not equal or entail support. Sorry.


Russell DiSilvestro
Department of Philosophy
Sacramento State

Monday, May 2, 2016

The Bigger Picture of Big Data

Data is everywhere: I wake up and check my disappointing sleep patterns; monitor my sluggish steps to the kitchen; before my morning French press I’ve generated at least 3 of the 4.5 billion-plus Facebook likes for that day; and with one click I’ve warned morning commuters about the wild turkey-caused traffic jam on Russell Blvd. In the modern world, information is plentiful, and, more importantly, predictively useful.

Bing Predicts has consistently predicted NFL winners based on information about team statistics, match-up dynamics, online searches and discussion, and even facts about weather conditions and turf type. What some might take to be trifling online discussion actually increases the accuracy of Bing’s predictions by 5%. Data is useful for spring-season prediction, too: we can now trace allergy zones based on tree-planting information, and soon you’ll be able to plot your sprint from pollen in real time.

Forbes reported on the increasing amount of data in the world:
  • 40,000 Google search queries every second
  • By 2020, we can expect about 1.7 megabytes of new information to be created every second for every human being on the planet.
  • And this one: At the moment less than 0.5% of all data is ever analyzed and used.
Does more data mean reliable data? Modern trends in data analysis that deal with heavy volume, velocity, variety, scope, and resolution can be grouped under ‘Big Data’ (Kitchin 2014). Some views on Big Data seem to suggest that small blunders can be tolerated because a comprehensive trend is generated in the process (Halevy et al. 2009; Mayer-Schönberger and Cukier 2013).

Suppose you’re using an app that reports real-time pollen spread through user clicks. As soon as a user experiences blaring sneezes, watery eyes, observes pollen clouds, etc., that user can click different options, warning others. The problem is that a small portion of the population is subject to the nocebo effect, where sheer expectation produces real physical symptoms (even with allergic reactions), thus creating bits of false-positive data that mislead others about allergy trends. The proposed solution is that with enough data the “true” trend (the signal) pierces through, while the false positives disappear into the background noise. This is exactly what is behind aggregation effects, where averaging multiple individual guesses produces reliable results and errors cancel out (Yi et al. 2012).
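To make the aggregation idea concrete, here is a minimal simulation sketch in Python. Every number in it (the “true” pollen level, the error spread, the report counts) is invented for illustration; the point is only that averaging many independent, zero-mean individual errors pulls the aggregate estimate toward the true value as the number of reports grows.

import numpy as np

rng = np.random.default_rng(0)

true_pollen_level = 7.0   # hypothetical "true" pollen level for one region
noise_sd = 3.0            # spread of each user's independent reporting error

for n_reports in (10, 100, 1000, 10000):
    # Each report = true level + an independent, zero-mean error.
    reports = true_pollen_level + rng.normal(0.0, noise_sd, size=n_reports)
    estimate = reports.mean()
    print(f"{n_reports:>6} reports -> aggregate estimate {estimate:5.2f} "
          f"(off by {abs(estimate - true_pollen_level):.2f})")

Note that the cancellation depends on the individual errors being independent and roughly zero-mean; whether nocebo-style clicks really wash out turns on whether they behave that way, which is the independence caveat flagged in the next paragraph.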

One methodological concept driving the value of Big Data seems to be that data produces notable trends, even in the presence of small blunders caused by, e.g., the malfunction of individual data measures. (As I’ve mentioned in a previous post, multiple independent pieces of data are only useful if the data is generated by independent sources.) But there is more to Big Data methodology. Big Data methods aren’t merely about analytically sitting back, generating a groundbreaking algorithm, and letting the significant relationships come to the surface. Rather, Big Data is a hands-on process that consists of data gathering, classification, and analysis. One of the notable features of the Big Data process is the role of what I call ‘selection.’ Since Big Data doesn’t provide a comprehensive picture of a system (see Callebaut 2012), scientists have to figure out what to focus on and what to ignore at every step in the Big Data process. This means that it matters not only how much data we have, but also how we select at the data-gathering, classification, and analysis stages.

Broadly, I use ‘selection’ to refer to scientific engagement with limited system variables—whether this is at the sampling, instrumentation, or modeling level (see van Fraassen 2008). Selection in data gathering occurs when information about certain variables is not recorded. For example, Kaplan et al. (2014) report that social and behavioral choices—e.g., the consumption of fruits/vegetables—are only recorded 10% of the time in electronic health records (EHRs), even though such choices are empirically linked to relevant medical conditions (343). Selection in data classification limits the type and number of categories used to sort the data. When filling out an online survey, is there an option for ‘indifferent’? When filing an insurance claim, is there a category for (unprovoked) moose attack? Information can be lost when using classificatory systems that are missing categories or contain vague/ambiguous categories (e.g., the use of ‘inconclusive’ as a category) (Berman 2013).
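As a toy illustration of classificatory selection (the categories and responses here are hypothetical, not drawn from any actual survey), consider what happens when a form simply has no slot for what a respondent actually reports:

from collections import Counter

# What respondents actually feel:
true_responses = ["agree", "indifferent", "disagree", "indifferent",
                  "indifferent", "agree", "disagree", "indifferent"]

# A survey form that offers only "agree" / "disagree"; an indifferent
# respondent has to pick something, so (here, arbitrarily) they pick "agree".
def record_response(response):
    return response if response in ("agree", "disagree") else "agree"

recorded = [record_response(r) for r in true_responses]

print("actual:  ", Counter(true_responses))
print("recorded:", Counter(recorded))
# The recorded data look decisive even though half the sample was indifferent,
# and no later analysis of the recorded data can recover that fact.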

Selection at the analysis level requires finesse in working with data sets. The analyst engages Big Data by selecting relevant data sets and matching variables between sets to perform the proper statistical comparison. Selecting sets and matching variables occur in prospective and retrospective designs, so this is not a new problem. However, methodological transparency (i.e., knowing the methods used to generate the data) is often limited in Big Data contexts, because it requires asking “preanalytic” questions whose answers are not available (Berman 2013, 95). Furthermore, without methodological transparency, our understanding of relationships between variables is limited.

Suppose that scientists aim to find the parasite responsible for some physiological condition P by analyzing data from brain samples from different regions and decades. Analysts find data on brain samples with P and brain samples without P, and then compare the samples to limit possible parasite culprits. To better match our groups for confounds, we can ask specific questions: What is the process of selecting the individual samples? Are all brain samples taken post-mortem? What is the process of preparing the sample for measurement analysis? At the analytic level, this information is lost. One major difficulty emerges: we have limited information about possible error sources, such as selection bias and laboratory techniques that can potentially alter intrinsic components of the tissue. This is problematic because, if the experimental and control groups have different error sources, we may mistake confound-driven frequency differences for genuine ones.
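Here is a minimal simulation sketch of that worry (every rate, sample size, and technique name below is invented for illustration; this is not any actual study design). By construction the parasite is equally common in both groups, but the samples in the two groups are processed differently, and that alone is enough to make a naive comparison look decisive:

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

n_per_group = 5000
true_prevalence = 0.20    # parasite rate, identical in both groups by construction

# Chance of detecting the parasite when present, by (invented) lab technique.
detect_if_present = {"fresh-frozen": 0.90, "formalin-fixed": 0.55}

def detections(technique):
    present = rng.random(n_per_group) < true_prevalence
    found = present & (rng.random(n_per_group) < detect_if_present[technique])
    return int(found.sum())

# Suppose, hypothetically, that most P samples happened to be fresh-frozen and
# most non-P samples formalin-fixed -- a selection difference, not biology.
found_P = detections("fresh-frozen")
found_noP = detections("formalin-fixed")

table = [[found_P, n_per_group - found_P],
         [found_noP, n_per_group - found_noP]]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"detections: P group {found_P}, non-P group {found_noP}, p = {p_value:.1e}")
# The tiny p-value reflects the processing difference, not a real difference
# in how common the parasite is.

Without the “preanalytic” information about how samples were selected and prepared, nothing in the analyzed data set itself distinguishes this artifact from a genuine association.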

While transparency poses a problem, the aim of this picture is not to make us skeptical about Big Data, but rather to shift our focus to the Big Data process. Reliable results aren’t just about aggregate outcomes. They’re about careful selection at each step in the data process.


References

Callebaut, Werner. 2012. “Scientific perspectivism: A philosopher of science’s response
to the challenge of big data biology.” Studies in History and Philosophy of Biological and Biomedical Science 43(1):69-80.

Berman, J.J. 2013. Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information (1st ed.). San Francisco, CA: Morgan Kaufmann Publishers Inc.

Halevy, A., Norvig, P., and Pereira, F. 2009. “The Unreasonable Effectiveness of Data.” IEEE Intelligent Systems 24(2):8-12.

Kaplan, R., Chambers, D., and Glasgow, R. 2014. “Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias.” Clinical and Translational Science 7(4). DOI: 10.1111/cts.12178.

Kitchin, R. 2014. “Big data, new epistemologies and paradigm shifts,” Big Data and
Society 1:1-12.

Mayer-Schönberger, V., and Cukier, K. 2013. Big Data: A Revolution That Will Transform How We Live, Work, and Think. New York: Houghton Mifflin Harcourt Publishing Company.

Yi, S. K. M., Steyvers, M., Lee, M. D., and Dry, M. J. 2012. “The Wisdom of the Crowd in Combinatorial Problems.” Cognitive Science 36(3). doi:10.1111/j.1551-6709.2011.01223.x.

Vadim Keyser
Department of Philosophy
Sacramento State