When asked to give an example of an analytic truth, something that simply must be true no matter what, a clear go-to example is 2+2=4. No matter what happens, no matter what experience may tell us, we can rest secure in the unassailable truth that putting two together with two will always yield four. The only problem is that it isn't always the case that 2+2=4. The world in which we live does not always lend itself to such simplistic precision.
For example, you can add 2 cups of liquid to another 2 cups of liquid and get a grand total of 3.8 cups of liquid. (You can see a simple illustration of this here.) How is this possible? It can happen when the first two cups of liquid are water and the second two cups are 99% rubbing alcohol. The alcohol molecules are smaller than the water molecules, and hence fit into the 'negative space' between the water molecules (think of sand filling in the negative space in a mason jar full of ping-pong balls). 2 cups plus 2 cups equals 3.8 cups.
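The arithmetic of the mixing example can be sketched in a few lines of Python. The flat 5% contraction factor below is purely an illustrative assumption (real shrinkage depends on temperature and concentration), chosen so the numbers match the 2 + 2 = 3.8 case above:

```python
def mixed_volume(water_cups, alcohol_cups, contraction=0.05):
    """Volume of a water-alcohol mixture, assuming a fixed 5% shrinkage.

    The contraction factor stands in for the real physics: smaller
    alcohol molecules pack into the spaces between water molecules,
    so the combined volume is less than the sum of the parts.
    """
    naive_total = water_cups + alcohol_cups
    return naive_total * (1 - contraction)

print(mixed_volume(2, 2))  # ~3.8 cups, not 4
```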
Now I know what you're thinking: 'of course 2 and 2 of DIFFERENT THINGS won't necessarily equal 4 of the same thing. 2 + 2 is only guaranteed to equal 4 when you are adding the SAME THING together.' Okay, fair enough, but that just gives rise to a key question: what, exactly, does it mean for two things to be 'the same thing'? Clearly two cups of water and two cups of alcohol are not 'the same thing' for our current purposes, but when can we say that two things are, indeed, the same thing?
Consider just water for a moment. Any two cups of water will differ in myriad ways from any other two cups of water; they will have slightly different levels of trace minerals and other atoms besides hydrogen and oxygen. No matter what your filter company might tell you, in the real world there is no such thing as 'pure water', and as such, no two cups are ever 'the same' as another two cups.
I suspect that some of you will still be thinking that I've missed the point. When we say '2+2 = 4', we're not talking about two OF anything; we just mean TWO, the abstract, the number, not a measurement of things in the world, but simply the concept of 'one and one.'
Again, this is a fair retort, but again this idea needs to be unpacked. What exactly IS 'the number two' when it is abstracted away from things in the physical world? There is a huge and historic literature on the ontology of numbers, which I cannot begin to summarize here. Suffice it to say, the view of numbers expressed in the above objection is likely some kind of 'mathematical realism', the most popular variety of which is 'mathematical Platonism'. This view, roughly stated, claims that numbers are real, just as real as (if not more real than) material objects, and that they endure in their own 'plane' of existence separate from the physical.
According to its champions, mathematical Platonism is the only way we can truly make sense of the idea that '2+2=4 is true', precisely because the non-Platonist alternatives are susceptible to the kind of counterexample that I opened with. Non-Platonist views are limited to saying '2+2=4' is only 'mostly true', 'approximately true', or 'true in some contexts'.
Personally, I don't find this a terribly hard bullet to bite. For me, it's a lot easier to bite than the idea that there really is an eternal, immaterial realm of numbers (and possibly other categories of non-physical things) that transcends, yet at least loosely applies to, the physical world. This is, of course, merely an appeal to my personal intuition, an argumentative strategy that I don't place too much stock in, but if nothing else it should suffice to give the Platonist-sympathizer pause. Platonists have more than just their appeal to incredulity in their quiver, of course, but I suspect that said incredulity drives more people towards Platonism than most are comfortable admitting.
I think this temptation towards mathematical Platonism should be resisted. One way to do so is to reflect on how science education often sweeps situations where 'the math doesn't add up' (like my opening example) under the rug. Anyone who has taken a high school science class is familiar with the concept of 'significant digits', which allows scientists to ignore potential differences in measurement that go beyond either their instruments' capacities or the particular needs of the given experiment.
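The significant-digits move is easy to make concrete. The helper below (a hypothetical name, not a standard library function) rounds a value to n significant figures; it is exactly the operation that turns a measured 3.8 back into a tidy 4:

```python
from math import floor, log10

def to_sig_figs(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the magnitude of x,
    # so 'n significant figures' works at any scale.
    return round(x, -int(floor(log10(abs(x)))) + (n - 1))

print(to_sig_figs(3.8, 1))    # 4.0 -- the imprecision disappears
print(to_sig_figs(3.987, 2))  # 4.0
```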
There's nothing wrong with appealing to significant digits, of course; infinite precision of measurement isn't possible, so for practical purposes it makes sense, at a certain point, to just round things off. But when we do that we need to recognize that our inability (or unwillingness) to measure infinitesimally small differences doesn't mean they're not there. Richard Feynman famously compared the accuracy of the predictions of quantum physics to specifying the width of North America to within the length of a single human hair. That is an utterly astounding finding, of course, but it only underscores the point I'm trying to make: if you add 2 hairs and 2 hairs to another 2, and another 2, and another 2... pretty soon your margin of error will add up. In quantum mechanics 2+2 might equal 4, but 2 trillion + 2 trillion might only equal 3,999,999,999,999.
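The accumulation point can be sketched numerically. In the toy model below, each measured '2' falls short of its true value by a tiny per-unit error (the 5e-13 figure is purely illustrative, picked to reproduce the numbers in the paragraph above); one addition is indistinguishable from 4, but across trillions of additions the shortfall grows to a whole unit:

```python
def measured_sum(true_value, count, per_unit_error=5e-13):
    """Sum `count` copies of a value, each short by a tiny measurement error."""
    return count * true_value - count * per_unit_error

print(measured_sum(2, 2))           # ~4: the error is invisible
print(measured_sum(2, 2 * 10**12))  # ~3,999,999,999,999: a full unit short
```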
Perhaps things are different in the world of Plato's forms, but the world in which we live is not mathematically precise. Most of the time this imprecision doesn't matter, and we can safely ignore it. But just because we can ignore it doesn't mean it's not there. In this world, 2+2 only equals 4 when we decide that we don't really care about the microscopic difference between the substances in question.
Garret Merriam
Philosophy Department
Sacramento State