The Turing Test has many versions. Here is one. It involves a contest in which the contestant is placed in a temporarily sealed room. The contestant is either a computer or else a human who understands Chinese, though which one is initially unknown to the judge of the contest. The room serves as a black box, except that it is connected via the Internet to the judge, whose job is to ask written questions of the inhabitant of the room and then, on the basis of the answers, guess whether the room contains a computer. The judge is required to send messages electronically into the room that are written in Chinese, and the contestant sends back written responses. The judge, who does understand Chinese, can also get outside assistance from experts.
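To make the setup concrete, here is a minimal Python sketch of one trial of the contest. Every name in it is my own illustrative invention, not part of any official formulation of the test, and the judge and contestant are reduced to placeholder functions:

```python
# A toy rendering of one trial: the judge sends written questions into the
# sealed room, reads the written answers, and guesses what is inside.

class Contestant:
    """The room's inhabitant: a computer, or a human who understands Chinese."""
    def __init__(self, is_computer: bool, reply_fn):
        self.is_computer = is_computer
        self.reply_fn = reply_fn  # a Chinese question in, a Chinese answer out

def run_trial(contestant: Contestant, questions, judge_guess_fn) -> bool:
    """Collect the room's answers, let the judge guess 'computer or not',
    and report whether the guess was correct."""
    transcript = [(q, contestant.reply_fn(q)) for q in questions]
    return judge_guess_fn(transcript) == contestant.is_computer
```

Over a long series of such trials, the criterion in the next paragraph is just a tally of how often the judge's guess turns out to be right.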
The computer passes the test if, during a long series of trials, the judge cannot correctly identify the computer at least 70% of the time. Turing’s idea is that passing the test is a sufficient condition for understanding Chinese, though not a necessary condition, provided the judge does his or her best. The computer will have to give false answers to questions such as the Chinese versions of “Are you a computer?”, “How would you describe your childhood?”, and “Do you know the cube root of 10^43 − 3?”
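The arithmetic question shows why the lying is forced: extracting the cube root of a 43-digit number is effortless for a machine and hopeless for an unaided human, so an honest instant answer would give the computer away. A quick check in Python, using exact integer arithmetic:

```python
def integer_cube_root(n: int) -> int:
    """Return the largest r with r**3 <= n, by binary search."""
    lo, hi = 0, 1
    while hi ** 3 <= n:
        hi *= 2                      # grow an upper bound
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid
    return lo

n = 10 ** 43 - 3
r = integer_cube_root(n)
assert r ** 3 <= n < (r + 1) ** 3    # sanity check
print(r)                             # 215443469003188, produced almost instantly
```

A human contestant could not produce those fifteen digits on the spot, so the computer must feign the human’s limitation.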
This Turing Test is considered an excellent test not only by behaviorists but also by philosophers of mind who favor functionalism. This is because functionalists believe that having a mind consists merely in having one’s parts function properly, regardless of whether those parts are made of flesh or of computer chips. According to functionalism, understanding language can consist merely in the ability to manipulate symbols properly. The idea that physical constitution is unimportant is called “multiple realizability” in the technical literature. Functionalism, with its endorsement of multiple realizability, is the most popular philosophy of mind among analytic philosophers.
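A toy example makes “multiple realizability” vivid: two objects with entirely different internal constitutions can play exactly the same functional role. The classes and the tiny symbol set below are my own illustrations:

```python
# Two 'realizers' of one functional role: map an incoming symbol to a reply.

class TableRealizer:
    """Realizes the role with a stored lookup table."""
    def __init__(self):
        self._table = {"ni hao": "ni hao", "zai jian": "zai jian"}

    def respond(self, symbol: str) -> str:
        return self._table.get(symbol, "ting bu dong")

class RuleRealizer:
    """Realizes the same role with an explicit rule and no table at all."""
    def respond(self, symbol: str) -> str:
        if symbol in ("ni hao", "zai jian"):
            return symbol            # echo the greeting or farewell
        return "ting bu dong"

# Different innards, identical input-output behavior:
for s in ("ni hao", "zai jian", "xie xie"):
    assert TableRealizer().respond(s) == RuleRealizer().respond(s)
```

For the functionalist, this sameness of input-output role is all that matters; whether the role is realized in flesh or in chips is as irrelevant as the difference between the two classes above.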
John Searle, a philosopher of mind currently at U.C. Berkeley, has offered a Chinese Room Argument that I consider an effective refutation of functionalism, because it refutes the idea that language understanding can consist solely of symbol manipulation by a computer. Suppose, says Searle, that we had a computer program that could pass the Turing Test in Chinese. Then the functionalist must say the machine truly understands Chinese and so has a mind. But, says Searle, although he himself understands no Chinese, he could in principle step into the room of the Turing Test, replace the computer that passed the test, follow the steps of the computer program as they are described in English, and do everything the computer program does when it processes Chinese symbols. Written Chinese symbols sent into the room by the judge would be processed more slowly by him than by the computer, but speed isn’t crucial for the test, and the test could be redesigned to control for speed of answering. In short, you don’t need to understand Chinese in order to pass the Turing Test in Chinese. So, functionalism is incorrect.
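Searle’s procedure inside the room can be pictured as bare rule-following over uninterpreted shapes. Here is a toy sketch; the two-entry rulebook is a placeholder, since a program that actually passed the test would need something vastly larger and cleverer:

```python
# Searle consults an English rulebook pairing shapes of incoming Chinese
# symbols with shapes to copy out. No step of the procedure consults
# what any symbol means.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你是电脑吗？": "当然不是！",   # "Are you a computer?" -> the required false "Of course not!"
}

def searle_in_the_room(incoming: str) -> str:
    """Match the incoming squiggle and copy out the prescribed squiggle."""
    return RULEBOOK.get(incoming, "请再说一遍。")  # default: "Please say that again."

print(searle_in_the_room("你是电脑吗？"))  # prints 当然不是！
```

Nothing in running this procedure requires understanding Chinese, and scaling up the rulebook adds complexity, not comprehension; that is exactly Searle’s point.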
No, responds Daniel Dennett, a well-known philosopher of mind and defender of functionalism. Searle is fooling himself. He may not consciously realize that he himself understands Chinese, but unconsciously he shows that he does. If the room passes the Turing Test for Chinese with Searle sitting inside doing the work of the computer program, then the room as a whole system understands Chinese, even if its Searle-part says it doesn’t. Similarly, we readers of this blog understand English even though our livers know no English. This response by Dennett is called the Systems Reply to the Chinese Room Argument. It is the functionalist’s favorite response to Searle’s attack on functionalism.
To speak for Searle here, I believe the Systems Reply fails to save functionalism. I am not claiming that a machine couldn’t pass the Turing Test. A cyborg might pass. Indeed, I know for sure that a machine can pass the test, since I myself am a physical machine. But the philosophically key point is that I do not understand language merely because I am an effective symbol processor. Mere symbol processors can’t know what they are doing. I understand language because I am an effective symbol processor with a flesh-and-blood constitution. Biochemistry is crucial for understanding language, even for understanding that a certain pattern on a computer screen is a meaningful Chinese symbol. Almost all neurobiology researchers appreciate this, and thus pay close attention to biochemistry as they make their slow advance toward learning the details of how the brain causes the mind. Functionalists do not appreciate this.
Yet every example we have of an entity that understands language is made of flesh and blood, of meat. (Yes, I am a “meat chauvinist.”) I don’t claim to know for sure that meat is required for understanding language, nor to know how to solve the mystery of what it is about meat that enables it to realize a mind (surely not every feature of the meat is essential). But I believe there is enough evidence to say we do know that meat, or at least something with a meat-equivalent biochemistry, is needed, and that any neuroscience researcher who ignores this fact about biology is not going to make a breakthrough. Yet Dennett and the other functionalists are committed to the claim that they know meat is not required. That’s their big mistake.
Brad Dowden
Department of Philosophy
Sacramento State