Sunday, April 27, 2014

Could a computer ever have a mind?

Yes, a computer could have one, said Alan Turing in 1950, when he devised his famous Turing Test. He wanted a behavioral test for the presence of mind that would not prejudice the outcome by considerations about whether the thing possessing the mind has a human shape or a human voice or human biology. Turing wanted to avoid having to define mind itself, but thought that his test, which is a test for language understanding, is so difficult to pass that if a computer program did pass it, then everyone would agree that it really understands language and so has a mind.

The Turing Test has many versions. Here is one. It involves a contest in which the contestant is placed in a temporarily sealed room. The contestant is either a computer or else a human who understands Chinese, though which one it is remains initially unknown to the judge of the contest. The room serves as a black box, except that it is connected via the Internet to the judge, whose job is to ask written questions of the inhabitant of the room and then, on the basis of the answers, guess whether the room contains a computer. The judge is required to send messages electronically into the room, written in Chinese, and the contestant sends back written responses. The judge, who does understand Chinese, can get outside assistance from experts.

The computer passes the test if, during a long series of trials, the judge cannot correctly identify the computer at least 70% of the time. Turing’s idea is that passing the test is a sufficient condition for understanding Chinese, though not a necessary condition, provided the judge does his or her best. The computer will have to give false answers to questions such as the Chinese versions of “Are you a computer?” and “How would you describe your childhood?” and “Do you know the cube root of 1043 - 3?”
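
In code, the scoring rule amounts to something like the following sketch (the trial data and the exact way of tallying guesses are illustrative assumptions of mine, not part of Turing’s proposal):

```python
# A minimal sketch of the pass/fail rule described above: the computer passes
# if the judge's correct identifications fall below the 70% threshold.
def test_passed(judge_guesses, true_identities, threshold=0.70):
    correct = sum(g == t for g, t in zip(judge_guesses, true_identities))
    return correct / len(true_identities) < threshold

# Illustrative run: over ten trials the judge is right only six times (60%),
# so on this scoring rule the computer passes.
guesses = ["computer", "human", "computer", "human", "computer",
           "human", "computer", "human", "computer", "human"]
truth   = ["computer", "computer", "computer", "human", "human",
           "human", "human", "human", "computer", "computer"]
print(test_passed(guesses, truth))  # True
```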

This Turing Test is considered an excellent test not only by behaviorists but also by philosophers of mind who favor functionalism. This is because functionalists believe having a mind consists merely in having one’s parts function properly, regardless of whether those parts are made of flesh or computer chips. According to functionalism, understanding language can consist merely in having the ability to manipulate symbols properly. This idea that physical constitution is unimportant is called “multiple realizability” in the technical literature. Functionalism, with its endorsement of multiple realizability, is the most popular philosophy of mind among analytic philosophers.

John Searle, a philosopher of mind currently at U.C. Berkeley, has offered a Chinese Room Argument that I consider to be an effective refutation of functionalism because it serves as a refutation of the idea that language understanding can consist solely of symbol manipulation by a computer. Suppose, says Searle, that we had a computer program that could pass the Turing Test in Chinese. Then the functionalist must say the machine truly understands Chinese and so has a mind. But, says Searle, although he himself understands no Chinese, he could in principle step into the room of the Turing Test, replace the computer that passed the test, follow the steps of the computer program as they are described in English, and do anything the computer program does when it processes Chinese symbols. Written Chinese symbols sent into the room by the judge would be processed more slowly by him than by the computer, but speed isn’t crucial for the test, and there could be a redesigned test that controls for speed of answering. In short, you don’t need to understand Chinese in order to pass the Turing Test in Chinese. So, functionalism is incorrect.

No, responds Daniel Dennett, who is a well-known philosopher of mind and defender of functionalism. Searle is fooling himself. He may not consciously realize that he himself understands Chinese, but unconsciously he shows that he does. If the room passes the Turing Test for Chinese with Searle sitting inside doing the work of the computer program, then the room as a whole system understands Chinese even if its Searle-part says it doesn’t. Similarly, we readers of this blog understand English even if our liver knows no English. This response by Dennett is called the Systems Reply to the Chinese Room Argument. It is the favorite response of the functionalist to Searle’s attack on functionalism.

To speak for Searle here, I believe the Systems Reply fails to save functionalism. I am not claiming that a machine couldn’t pass the Turing Test. A cyborg might pass. I know for sure that a machine can pass the test, because I myself am a physical machine; but the philosophically key point is that I do not understand language just because I am an effective symbol processor. Mere symbol processors can’t know what they are doing. I understand language because I am an effective symbol processor with a flesh and blood constitution. Biochemistry is crucial for understanding language, even for understanding that a certain pattern on a computer screen is a meaningful Chinese symbol. Almost all neurobiology researchers appreciate this and thus pay close attention to biochemistry as they make their slow advance toward learning the details of how the brain causes the mind. Functionalists do not appreciate this.

Yet every example we have of an entity that understands language is made of flesh and blood, of meat. (Yes, I am a “meat chauvinist.”) I don’t say I know for sure that meat is required for understanding language, nor do I claim to know how to solve the mystery of what it is about meat that enables it to realize a mind (surely not every feature of the meat is essential), but I believe there is enough evidence to say we do know that meat, or at least something with a meat-equivalent biochemistry, is needed; and we know that any neuroscience researcher who ignores this fact about biology is not going to make a breakthrough. Yet Dennett and the other functionalists are committed to the claim that they know meat is not required. That’s their big mistake.

Brad Dowden
Department of Philosophy
Sacramento State

43 comments:

  1. Brad, thanks very much for this interesting post. I have always been a little reluctant to regard Searle's thought experiment as a definitive way of testing Turing's proposal. As I read the essay, Turing was primarily interested in the question whether machines can think, not specifically whether machines understand a natural language. I'm inclined to think that the Turing test overdetermines what is required for thinking, unless what we mean by thinking is "thinking in a language." But that would mean that animals don't think and that the vast majority of unconscious inferencing our own brain does probably isn't thinking either. For me, even a system that doesn't itself understand its linguistic inputs and outputs is doing something I would be happy to call thinking if it manages these appropriately.

    Regarding linguistic understanding itself, I'm definitely more on Dennett's side of this debate. I don't think there is much value to being a meat chauvinist because we already know how to use human DNA to store information and do computations. But implementing a DNA-based system that passes the Turing test won't satisfy people who oppose functionalism if they still regard it as implementing the 'mindless' algorithms that Searle thinks cannot be sufficient to produce linguistic understanding.

    I think when Searle conceived the Chinese Room argument, most people thought it was pretty plausible that a machine would pass the Turing test within the next 30 years, but the surprise is that nothing has really come remotely close. The movie "Her" makes it seem as if we are on the verge of something like this, but we just aren't. To me, that, rather than Searle's a priori case, is what we should take seriously. Searle, I think, would say that even if a machine passed the Turing test it wouldn't be thinking, and the reason is that it doesn't understand, or grasp the meaning of, what it is saying. I would have preferred that he simply made a prediction: Machines will not pass the Turing Test until they can understand what they are saying. I think, on the basis of the poor track record of AI so far, it's reasonable to believe that this is correct, that anything that actually does pass the Turing test will understand what it is saying. But the problem is that we don't know what the physical/structural requirements of an understanding machine are. Right now it looks like some pretty significant insights into how our brain works will be required before we do.

  2. Interesting, I would've thought that Professor Dowden would fall on the other side of this debate.

    I've always thought that Searle severely underestimated what it would take for a machine to be an effective "symbol processor". What would such a program even look like? Surely it wouldn't be a simple algorithm exchanging text-based inputs for text-based outputs. It would have to be much more dynamic than that, processing more than merely textual symbols. For example, it would have to be able to process things like homonyms, context, etc., through visual symbols (such as images and video) and auditory symbols (such as vocal inflections, pacing, or volume) - both of which many of us do with our various devices on a daily basis. It would then have to relate these to a database of previously stored (i.e. learned) symbols for context, as well as estimating a value for the accuracy of the symbols' usage, both individually and as strings (in order to deal with things like sarcasm, mood, or other context-based factors that could affect meaning). Now if Searle went into that room and utilized these complex algorithms with his own mind in order to perform these functions, I don't think he could as confidently say that he doesn't understand Chinese.

    Replies
    1. Stephen, you make some interesting remarks about how many complex features of language would be required for a machine to pass the Turing Test. All those features you mention would be needed. However, any computer is at its heart nothing but a Turing machine, whose key feature is its input-output table. If Searle steps into the room where a symbol-processing computer has passed the Turing Test and then proceeds to memorize that table, Searle would pass the Turing Test, too. Why can't we trust Searle when he says that memorizing the table, which is described to him in English, wouldn't give him any understanding of Chinese?
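
      To make the input-output-table picture concrete, here is a toy sketch; the entries are invented placeholders, and a table adequate for real Chinese conversation would be astronomically larger:

```python
# A toy "rule book": every incoming string of Chinese symbols is mapped to an
# outgoing string. Whoever applies the table -- a CPU, or Searle with the rules
# memorized -- produces the same replies without attaching meaning to anything.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你是电脑吗？": "不是，我是人。",  # "Are you a computer?" -> "No, I'm a person."
}

def room_reply(incoming: str) -> str:
    return RULE_BOOK.get(incoming, "请再说一遍。")  # default: "Please say that again."

print(room_reply("你是电脑吗？"))
```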

    2. Thanks for the reply professor! So I think my main contention was that any form of program would have to have much more complicated operations than a sort of static "input-output" table. The program would have to dynamically respond to inputs/stimuli, based upon relating those inputs to a database of what it has previously learned - as well as amending that database and its values as it continues to learn from a sort of "feedback loop" from its input/output interaction with the hypothetical judges. Now, this has become much more involved than simply having Searle memorize a table. It is an entirely interactive learning process. For Searle to truly step into the Chinese room and process symbols in the same fashion as the computer, the database would have to be uploaded into his mind (so that he "knows" everything the program has stored) as well as the complex algorithms that the program executes. My point is this: Isn't this pretty much how we understand/process language as humans anyway? I'm thinking about what happens in my mind when someone says "That's a cool looking tree", and a priori, a sufficient explanation is that I am receiving those linguistic symbols and relating them to my memories (analogous here to a SQL or NoSQL database) about how those symbols have been used in the past.
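
      A toy sketch of the kind of feedback-driven responder described above (the class, the fallback reply, and the scoring rule are invented stand-ins, not a real system):

```python
from collections import defaultdict

class LearningResponder:
    """Toy stand-in for a responder whose 'database' is amended by feedback."""

    def __init__(self):
        # maps each input to candidate responses and their learned scores
        self.database = defaultdict(lambda: defaultdict(float))

    def respond(self, message: str) -> str:
        candidates = self.database[message]
        if not candidates:
            return "请再说一遍。"  # fallback: "Please say that again."
        return max(candidates, key=candidates.get)  # best-scoring response so far

    def learn(self, message: str, response: str, reward: float) -> None:
        # the "feedback loop": the judge's reaction shifts future choices
        self.database[message][response] += reward

bot = LearningResponder()
bot.learn("你好吗？", "我很好，谢谢。", reward=1.0)
print(bot.respond("你好吗？"))
```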

  3. Randy, you say many animals think but couldn’t pass the Turing Test. I agree with you that it is not necessary for them to pass the Turing Test; but then Turing is not speaking to that issue. He is merely trying to give sufficient conditions for thinking, not any necessary conditions. You also make the similar point that the Turing Test “overdetermines what is required for thinking.” I agree with you. To repeat, Turing wasn’t intending to say what is required for thinking, but only to get you to agree that if something passes the test, it thinks.

    You wonder why he’s only testing for language. Well, the test is designed to test language understanding because Turing believes, and I agree with him, that nothing could successfully understand language without having a mind. I just don’t agree that computation alone is sufficient either for language understanding or even for knowing that a certain shaped squiggle on a computer screen is a Chinese symbol. This is not an a priori argument, as you say, but is an empirical one based on looking at the track record of trying and failing to solve the Turing Test and looking at the record of which things on Earth have a mind and which don’t.

    We have an additional disagreement that appears when you say, “I think, on the basis of the poor track record of AI so far, it's reasonable to believe that...anything that actually does pass the Turing test will understand what it is saying.” I recommend your softening this by revising the word “anything” and instead saying that anything that isn’t a mere symbol processor and yet actually does pass the Turing test will understand what it is saying. The Chinese Room Argument shows that if a mere symbol processor passes the test it won’t know the meaning of anything.

    And here is another recommended revision. You say, “Machines will not pass the Turing Test until they can understand what they are saying.” I think you should say that nearly all machines that can pass the Turing Test will not do so until they can understand what they are saying. Searle’s Chinese Room Argument has shown how in principle it could be possible to pass the Turing Test without understanding.

    I don’t believe a computer program by itself will ever pass the Turing Test. Ever. But if I am mistaken about this, then I will praise the program and the programmer. But the program won’t know what it’s doing; it won’t know what anything means.

    About the “Her” movie problem, I agree with you that we aren’t close to having a knowledgeable computer as in the movie, but let’s ask why there is such a problem. Dennett believes we are not close simply because programmers haven’t yet been clever enough at the symbol manipulation; I say we aren’t close for that reason and also for a second reason--we haven’t figured out how to incorporate into a robot or artificial machine what it is about meat that enables knowledgeable beings to be knowledgeable. It will take more than a computer program to pass the test. It will take a robot with meat, and when this thing passes the Test I will agree with Turing that it has a mind.

  4. Brad, thanks, that's an interesting response. I haven't read Turing's paper in a while, but my recollection both of the paper and of the behaviorist era in which it was written is that Turing would not have attached any real content to the claim "nothing could understand language without a mind." Superficially, at least, that sounds like there is an entity, the mind, that is explaining the capacity for this behavior. My view is that what Turing was doing was operationalizing thinking, proposing a test that would be sufficient for the existence of thought, and which would have the effect of blocking anyone from saying something like "well that can't be thinking, because the machine doesn't even have a mind!"

    I also am a little reluctant to accept your recommended revisions to my original claims, though perhaps you are right, I certainly wasn't being very careful. But the reason I am reluctant is that I don't believe that Searle's Chinese room argument shows that it is in principle possible to pass the Turing Test in this way. I think it defines a system that would instantiate a Turing machine, if it actually worked, but the problem is that the book of instructions may simply be in principle impossible for a natural language, which has infinite capacities. Searle didn't need the Chinese Room to be in principle possible, he just needed to say that if it were possible, then such a system would still not understand language.

    In the end, I think we are making the same guess about the capacities of computer programs, but I'm not sure we have the same explanatory ideas. You have two explanatory claims working here. One is that a Turing machine won't pass the Turing test because it doesn't have a mind, and the other is that it won't pass it because it won't know what anything means. My view is that we aren't saying anything interesting when we say those things. They aren't explanations at all. The words 'mind' and 'means' are just placeholders for stuff we don't understand yet. I think it's ok to use terms as placeholders in this way, but we shouldn't pretend we are giving alternative explanations.

    My own suspicion is that computer programs won't be able to pass the Turing Test because the methods they use are based on probabilities of words and phrases being associated, which derive from a database of past use. This doesn't seem like it could ever be adequate for linguistic creativity, which all humans possess, and which I think we use even in ordinary conversation.

    Replies
    1. Randy,
      The person I was trying to attack in my original remarks is the functionalist who accepts the strong A.I. thesis that the appropriately programmed digital computer with the right inputs and outputs would thereby have a mind in exactly the sense that human beings have minds.
      You say my explanatory claim is, “a Turing machine won’t pass the Turing Test because it doesn’t have a mind.” I did not make this claim, and I do not believe this claim. And I don’t believe it is explanatory, and I wasn’t trying to explain why a Turing machine will or won’t pass the test. And I don’t claim a mind is required, or even flesh and blood, for passing the test. What I do believe is that flesh and blood or something with a similar biology is required for having a mind. I’m not making any claim about what is required to pass the Turing Test, although I do have opinions on this that I have not mentioned. So, I’d say you are attacking someone else, not me.
      You also say my other explanatory claim is that a Turing machine (aka a computer), “won’t pass the test because it won’t know what anything means.” Again, I’m not making the claim that a Turing machine won’t pass the test, and I’m not making any claim that it won’t pass the test because of this or that. My own claims are about what we should think if a machine does pass it. I think you have an opponent in mind, but I believe I’m not that guy.
      Despite all this, we agree on some things. I agree with you that Turing himself would not have attached any real content to the claim “Nothing could understand language without a mind.” However, I do believe this even though Turing doesn’t. Turing himself was happy to have behavioristic operational definitions for any mentalistic terminology like “mind” and “thought.” Nevertheless, cognitive science has progressed to the point where we can say that having a mind does help explain why humans behave the way they do; it’s why they don’t behave like soda machines or like pendulums. But just saying they have a mind doesn’t by itself explain very much. The behaviorist will say human behavior can be explained in terms of other behavior plus the laws of behaviorism, without any appeal to mental entities. I think we can agree that the behaviorist can’t be successful in this project. There must be appeal to mental entities.
      I also agree with how you describe Turing’s purpose of proposing a test that would be sufficient for the existence of thought, and which would have the effect you mention.
      You say, “computer programs won’t be able to pass the Turing Test because the methods they use are based on probabilities of words and phrases being associated….” Well, there are different methods. If a programmer tries to get his or her computer program to pass the Turing Test using the old methods that were based on the probabilities in the way you suggest, then I’d agree with you that the program won’t be very successful.
      Another place where we agree is that at the moment we know so little about mind and meaning that it is reasonable to say “the words ‘mind’ and ‘means’ are just placeholders for stuff we don’t understand yet.”

    2. Brad, I, like you, don't attack people; I examine positions.

      I attributed the explanatory claims to you because in your reply you said "Turing believes, and I agree with him, that nothing could successfully understand language without having a mind," and "but the program won’t know what it’s doing; it won’t know what anything means." Those sure sound explanatory to me! But I understand now that they were not intended to be.

      I think we agree on most things. I'm not sure about the importance of explanatory appeal to mental entities, just because I'm not sure about the nature of the commitment you think is involved. Tom, e.g., believes that we need to make essential and irreducible reference to how a thing appears in order to explain the behavior of humans and animals. I think we don't know that. What do you think?

    3. Randy, I'd say it's a mistake to say we need to make essential and irreducible reference to how a thing appears in order to explain the behavior of humans and animals. Science has moved beyond Aristotle here.

      Regarding the importance of mental entities, don't you agree Noam Chomsky showed that behaviorism will not successfully account for behavior and that we need cognitive science, instead?

    4. Brad, yes, I agree that behaviorism is not an adequate explanatory framework. My hesitation is probably more terminological than anything. I'm comfortable attributing representational and information processing capacities to brains. I'm also comfortable explaining these capacities by reference to mental models, but I regard a mental model as an abstraction, rather than an entity that exists in the brain somehow. However, information increasingly strikes me as ontologically basic, and I really haven't processed the implications of that.

  5. Hi professor Mayes! Just a short thought with regards to your last paragraph. Don't we ourselves operate on probabilities in order to understand linguistic creativity? For instance, if someone said "That movie was bad!" (when in fact meaning that the movie was good, and assuming I have never heard the word bad used in that context), it would seem I am operating under some sort of subconscious probability algorithm to assess the meaning of that phrase as stating that the movie was indeed distasteful. However, after learning of this new usage, in the future when someone uses the word "bad", the probability that their meaning is in fact distasteful has lowered based on the new information that I have stored (or symbols that I have processed).

    However, perhaps I am missing your point (which, coincidentally enough, would mean my probability algorithm has misguided my understanding), and you meant that machines would never be able to invent new forms of linguistic creativity. Yet, this does not seem to undermine one's capacity to understand language. Surely there are many humans who are just plain bad at coming up with catch phrases, jokes, ironies, sarcastic remarks, and the like. I would even go further to suggest that the model of probabilities and databases doesn't preclude one from successfully creating a new linguistic occurrence. I would think the hindrance for a machine here is rather the motivation to invent, not its ability...but then I would categorize this more as an absence in its capacity for personal identity rather than in its capacity for understanding language.

    This ended up being not as short as I imagined...which I suppose tends to happen when writing about philosophy!

  6. Also, here is an interesting tech company making strides towards developing truly intelligent AI:

    "Vicarious AI passes first Turing Test: CAPTCHA"

    http://news.vicarious.com/post/65316134613/vicarious-ai-passes-first-turing-test-captcha

  7. Hi Stephen, I'm not really being very clear about that. Yes, I think you are right that probability judgments of one kind or another underlie our ability to discern the meaning of what people say. We need to use context to resolve both vagueness and polysemy, and context supplies us with probabilistic information concerning which interpretation is likely to be correct. My point on computer programs that are designed to pass the Turing Test is that most of the ones I know about base their responses on the frequency with which specific words and strings of words are associated in their own database. So, e.g., if it can do a quick Google Search on anything it has just heard, it can just respond with a sentence constructed from the words most frequently associated with that sentence. This seems like a reasonable basis for saying such machines mimic language use, but don't actually use language.
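
    A minimal sketch of the word-association method just described (the tiny corpus is an invented placeholder; real systems draw on far larger databases):

```python
import random
from collections import defaultdict

# Count which words follow which in a stored corpus, then build replies by
# chaining frequently associated words -- mimicry of language use, with no
# grasp of meaning anywhere in the process.
corpus = "the movie was good . the movie was long . the book was good .".split()

follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def babble(seed: str, length: int = 6) -> str:
    words = [seed]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sampled in proportion to observed frequency
    return " ".join(words)

print(babble("the"))  # e.g. "the movie was good . the"
```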

    Replies
    1. Ah yes, in that case we are in agreement!

      I think this might bring up another interesting question. Is there more than one way to understand language? For instance, using the example of flying, the way that we have engineered the flight of airplanes is much different than how birds fly (i.e. no flapping of the wings, no feathers, etc). Rather than trying to replicate how birds flew, we studied the principles of flight (wings, aerodynamics, lift, etc), and created something that reflected those principles.

      Perhaps this will be the path of strong A.I. in the future? If this is the case, then it will be true that science will not build something that understands language/thinks through the same process that a human brain thinks (though whether it actually can is another issue). However, wouldn't we have to admit that this A.I. does in fact truly understand language and think, just as we must admit that planes do in fact fly?

    2. Stephen, I'm inclined to agree with you about this. I think that when we say computers don't think we are always just saying that computers don't think like we do, and I think the comparison with flight is apt. One can easily imagine a bird looking at an airplane and saying "You call that flying? That's not flying. Flying essentially involves feathers and the flapping of wings." The history of anthropocentric explanatory models ain't pretty.

    3. Randy, you make some very interesting points here. I wonder, though, when we say computers don't think, if we aren't saying more than that computers don't think like we do. Don't we mean computers don't think like anyone does? And that includes birds.

    4. Stephen, I agree with you that there probably is more than one way to think. Birds may do it differently than squirrels or humans. We should be open to that possibility, even for computers. I don’t believe that ONLY brains can cause minds. I do believe in the possibility of artificial intelligence (A.I.) though I think that you’ll never create this on the kind of computers that run Watson and Deep Blue and Microsoft Word programs. However, birds and squirrels and humans all use their “meat” in some unknown but essential way that needs to be figured out before an artificial thinking being gets created.

      My student Hans Baker said in the Phil. 176 class on Thursday 5/1/14 that since computers are made of protons and electrons and the meat is made of protons and electrons, why do I single out meat for special treatment? Well, I agree with him that if a computer were allowed to re-form itself so that it used those protons and electrons the way living organisms do in their meat, and if this computer passed the Turing Test, then I’d agree it was really thinking. I would even agree that if the computer could re-form itself so that it used its protons and electrons to have the same chemistry as water, then the computer water would actually quench the thirst of thirsty animals. The problem is that computers aren’t allowed to re-form themselves, if they are going to be computers in the sense of just symbol processors working on linguistic syntax. Those restricted computers, even if they pass the Turing Test, won’t really be thinking. The Strong A.I. advocates believe otherwise.

    5. Prof. Dowden, I think you misunderstood what I meant by my comment in class on this topic. What I meant was that an analogue to the ion gradients which cause the firing of action potentials in neurons is the jumping of quantized packets between the nodes of a CPU. Similarly, I would say the motherboard/neural-network is analogous to the structure of the neuron/brain. The software would be analogous to the DNA/Enzymes in the neurons which govern what the neurons do. I did not mean to say that just because they are made of the same particles that those particles would act similarly to the particles in the brain. I just meant that I don't see a significant enough difference between the chemistry of neurons and the chemistry/physics of a computer to rule out the possibility of the computer performing a similar task.

      I think one serious misunderstanding on the side of the meat-chauvinists is thinking that the software of a successful AI would be a simulation of mind. The software should not be any such thing, I think. The way our brains work isn't anything like the thinking in our "minds", and the software wouldn't be similar to this thinking either. Consciousness seems to be an emergent property of systems which are complex, structured in some relevant way, and governed by rules on a micro-level rather than a macro-level. The Chinese room argument treats a Turing machine as something which has a list of rules it follows, but if a Turing machine could work this way, we would likely have been much more successful in our attempts to make one by now. The rules which govern the activity of our brain are on such a micro-level that we can't meaningfully ever say what piece of information a neuron is dealing with. We only know to some degree what communities of neurons are dealing with. The same will be true, I believe, of a successful AI. No one will be able to tell what piece of information each node of the processing system is dealing with.

      I would also like to note that I am not suggesting that complexity is sufficient for intelligence, nor even complexity with micro-level rules, but it is likely necessary. There are no semantics at the level of neurons/DNA, only at the level of systems. Why should the lack of semantics at the motherboard/software level be believed to show lack of semantics on a neural-network level? I don't think the syntax vs. semantics argument would apply to a successful Turing machine.

    6. Sorry, that "Unknown" is me, Hans.

    7. Hans,

      We are using different definitions for “Turing machine.” In Turing’s sense of “Turing machine,” we have already created many Turing machines. A Turing machine is just a computer. Turing and others showed that a special “universal” Turing machine can do whatever any other computer can do. In that sense we can buy a Turing machine from the Apple Corporation. There is no Turing machine that has passed the Turing Test, however.

      I did misinterpret your remark about brains and computers both being made of the same stuff so computers can do what human brains can do. I was making the trivial point that both are made of electrons, and all electrons behave the same way whether they are in brains or computers. At a higher level of organization, the computer parts and the brain parts seem to me not to be all that analogous. I do agree that the program running in the collection of neurons is essential to having a mind, but my main claim is that it takes more than the program to make a mind. It also takes appropriate biochemistry within the neurons. You disagree on this point, because you say, “I don't see a significant enough difference between the chemistry of neurons and the chemistry/physics of a computer to rule out the possibility of the computer performing a similar task.” To help understand this point, ask yourself whether computers could ever digest carrots. I think they can’t because they don’t have the proper biochemistry. Would you reply with, “I don't see a significant enough difference between the chemistry of carrots and the chemistry/physics of a computer to rule out the possibility of the computer performing a similar task” of digesting carrots?

      You promote functionalism because you say, “Consciousness seems to be an emergent property of systems which are complex, structured in some relevant way.” I would disagree and say that consciousness seems to be an emergent property of systems which are complex, structured in some relevant way and also made of the right stuff that permits it to have this functioning. A computer program that merely shuffles symbols is not made of the right stuff to quench our thirst even if it organizes its symbols so that it symbolizes water.

      I agree that the rules which govern the activity of our brain are on such a micro-level that we usually can't meaningfully say what piece of information a neuron is dealing with. We probably will only know to some degree what a full community of neurons is dealing with. I agree that “the same will be true…of a successful AI. No one will be able to tell what piece of information each node of the processing system is dealing with.” Nevertheless, this does not show that meat is irrelevant. The multiple realizability thesis says that the stuff is irrelevant in a successful AI. It could be made of star systems or of light beams and beer cans. I just don’t believe you can know that the stuff is irrelevant.

      I agree with you when you say, “I would also like to note that I am not suggesting that complexity is sufficient for intelligence, nor even complexity with micro-level rules, but it is likely necessary.” Yes, it is necessary but not sufficient.

      I also agree with you when you say, “There are no semantics at the level of neurons/DNA, only at the level of systems.”

      I would never suggest that “the lack of semantics at the motherboard/software level be believed to show lack of semantics on a neural-network level.”

      OK, I hope that helps to clarify our differences.

  8. Thanks Brad for putting this classic problem out there for discussion. A major problem with Searle's argument, as I see it, that's been revealed in the intervening years is this: First, Searle's argument is predicated on the assumption that if Searle himself has a subjective feeling that he understands something, as he does when he is speaking his native language to other speakers, that sense of understanding is a reliable indicator that he does understand. We have abundant examples and a mountain of psych research showing that people's reports or feelings about what's going on in their own minds aren't reliable. There are cases where we "understand" by a reasonable externalist account of what it is to understand that are not accompanied by that feeling. And there are cases where we have that feeling, but we clearly do not understand. The feeling we report doesn't map onto a real performance, capacity, or the facts. There are many other cases where we now know that we are poor theorists in general about what's going on in our own minds.

    Even worse, we don't actually have an instantiation of the Chinese Room with Searle in it, claiming, "Look, I am producing Chinese sentences but I don't understand a thing." We have an armchair thought experiment. We have Searle saying that IF we were to set up a Chinese Room, at some distant point in the future when the appropriate research has been done to produce a successful Turing machine, then that Searle would be able to instantiate the program, and that Searle would not have a feeling of understanding. And we know that because this Searle, today, who is thinking about what that might be like hypothetically, has a strong intuition that if he were in that situation he wouldn't understand what he is doing. What I am getting at is that the so-called success of the Chinese Room argument is predicated on a number of nested assumptions that I think we have good reason to doubt; therefore, I am, at best, agnostic about the conclusion it claims to have shown. And these are worries I have even before we start to talk about the problems that I think are buried in the Systems Reply exchange.

  9. Matt, I like the fact that you are trying to uncover the assumptions in Searle’s Chinese Room Argument.

    I agree with you that sometimes people feel they understand but don’t really understand, and sometimes people don’t feel they understand but nevertheless do. Having the feeling you do understand is a fairly reliable indicator, but of course it is not a perfectly reliable indicator. I’d take Searle at his word that he doesn’t understand Chinese until there is a better reason to believe otherwise than just that if he doesn’t understand Chinese then the Turing-Putnam thesis about Strong A.I. will get refuted.

    I presume you believe that if Searle memorizes the computer program that passes the Turing Test, then he’s a closet Chinese-understander who doesn’t understand himself. If Searle does unconsciously understand Chinese, then you do have an effective way to undermine the Chinese Room Argument.

    Passing the Turing Test in Chinese is a fairly reliable indicator that whatever is in the room understands Chinese, but the Chinese Room Argument convinces me that it’s not a perfectly reliable indicator. If I were to hear that something passed the Turing Test in Chinese, my first guess would be that it really does understand Chinese. But to make sure, I’d ask about the biological features of the contestant. I’d want to look inside the black box. If I were to look in and find no meat but just a fast, well-programmed computer, then I’d doubt the computer is understanding anything whatsoever, even if it does pass the test. Are you confident that meat is irrelevant to understanding?

    Nothing could successfully understand language without having a mind. I just don’t agree that computation alone is sufficient either for language understanding or even for knowing that a certain squiggle shape is a Chinese symbol. Maybe a robot someday will understand language, but this robot will have meat-like biology, unlike current robots. That robot won’t just be a computer program running on any old hardware as proponents of “multiple realizability” believe. The Strong A.I. functionalists are failing to realize that consciousness is a biological phenomenon, analogous to digestion and breathing. As Searle likes to point out, any biological activity can be simulated by a computer program, but a computer simulation of water will never quench the thirst of a real person. This is not an a priori remark; it is a prediction based on empirical evidence, and if you could present enough evidence that I’m wrong, then I’ll change my mind about meat.

  10. Thanks Brad. This is interesting. I have more layers of doubts that leave me agnostic about several things in the described scenario. You said, "I'd take Searle at his word that he doesn't understand Chinese..." But this is not what we are doing. We are imagining a far-flung, hypothetical scenario with a number of hidden and doubtful assumptions, and then we are asking, "How would Searle respond to the question 'Can you understand Chinese?' if he were in this situation?" Searle's normal answers to a question today, in a real situation, like "Do you understand how oil futures work on the stock market?" are often unreliable, I have argued. I maintain that answers to the question, "If you were in this elaborate science fiction scenario at some point in the future, and if you were able to memorize volumes and volumes of computer programming, and if you were able to mimic a machine's behavior, and if X and if Y, and if Z, then would you have a strong intuition that you understand Chinese?" are much more unreliable. In effect, we would be "taking him at his word" that his intuitions today about this elaborate situation are reliable indicators of the truth then. I don't think we should do that. If nothing else, we should take note of abundant research now that shows that alleged experts in the stock market, real estate, and in other areas have very high degrees of confidence about their abilities to make predictions and give reliable forecasts, but in fact they do worse than random chance. Their confidence doesn't correlate with their accuracy.
    I also think that the description of what would be involved in devising a strong AI system that could successfully pass the Turing Test is grossly oversimplified and conceals a number of mistaken presumptions about how such a system would be constructed/programmed that help to fuel the intuitions that we have that seem to corroborate Searle's conclusion. That is, I think the strong intuition we have to agree with Searle is more of an artifact of how the situation has been described than a realistic indicator of what would actually be involved. Parallel-processing, back-propagating connectionist networks, which are remarkably good at mimicking the behavior of neurons, are radically different from the sort of old-school, serial-processing code from AI research that Searle was reacting to decades ago. Once we get a more sophisticated idea about, and current information on, how such a Turing system would be built, the intuitions that fed the Searle argument are considerably diminished, or invalidated altogether. But that's another, big topic.
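
    For concreteness, here is a bare-bones example of the back-propagating, connectionist style mentioned above -- a toy network learning XOR, with layer sizes and learning rate chosen arbitrarily; nothing remotely like a Turing Test system:

```python
import numpy as np

# Tiny two-layer network trained by back-propagation to compute XOR.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # error signal at the output...
    d_h = (d_out @ W2.T) * h * (1 - h)          # ...propagated back to the hidden layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0.], [1.], [1.], [0.]]
```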

  11. Matt, From what you've just said I now believe your reasoning about layers of doubt is better than I first realized. Yes, it's a gigantic leap to saying we know what Searle's intuitions would be like if X, Y, and Z.

    Also, I agree that Searle was motivated by looking at 1980s computers rather than looking at more sophisticated computers that we have today. However, wouldn't you agree that today's and tomorrow's computers will still be equivalent to Turing machines of the same logical form that Turing imagined back in 1950? (This is called "Church's Thesis.") If so, Searle could in principle still memorize the Turing machine's program, though not really in practice.

    However, I notice you are avoiding the meat question. John Searle and Paul Churchland and Patricia Churchland agree that some kind of meat is needed for language understanding. Mere behavioral indistinguishability from a genuine understander of Chinese in the Chinese Room scenario doesn't imply genuine understanding of Chinese.

    Replies
    1. Gosh, these meat people are really hard for me to relate to. If they don't have a theory about what properties unique to meat are responsible for thinking, then it makes little sense to me that they would suppose meat to be required. To me the multiple realizability of lots of other known emergent properties should be enough to give them pause on inductive grounds. I just don't see how anyone who grasps what a vanishingly small bit of the universe we have ever sampled can put much stock on the purely inductive argument that every unambiguously thinking thing we have ever met was made of meat. If we eventually identify physical properties necessary for mentality that we have strong reasons for thinking can only be produced by carbon chemistry, then I would join the meat camp.

  12. Randy, I’m glad you are considering the meat factor in thinking. We have a common goal in wanting science to succeed in explaining thinking. We disagree, though, about the value of inductive arguments. I put a lot of stock in the fact that I’ve looked all around as much as I could (in the vanishingly small part of the universe I can access) and I’ve found a pattern: every one of the thinking things comes with biological features. And the evidence suggests the biology is very important. Shut down the metabolism of a human being and it stops thinking. Cut off the heads of kittens, and they stop thinking.

    You don’t “put much stock” in these facts. You say, “I just don't see how anyone who grasps what a vanishingly small bit of the universe we have ever sampled can put much stock on the purely inductive argument that every unambiguously thinking thing we have ever met was made of meat.” If you say that, then I can imagine you also saying, “I just don't see how anyone who grasps what a vanishingly small bit of the universe we have ever sampled can put much stock on the purely inductive argument that every unambiguously living thing we have ever met was made of something that needed water.” Yet astro-biologists put a lot of stock in arguments like this. It guides their research for signs of extraterrestrial life. Similarly, I recommended that researchers who want to understand thinking should pay attention to biology. Philosophers should, too. Thinking is surely more than merely running a program in the brain, isn’t it?

    I agree with you that meat is still a mystery. We don’t have a theory of thinking that indicates how the meat is important, but I believe we know enough about how to build explanatory theories that it is clear that we should incorporate the power of meat into that future theory. If not, we won’t get very far in explaining thinking, and explaining thinking is a common goal we have.

    Replies
    1. Brad, that makes a lot of sense. I like your point about the hypothesis being a legitimate basis for looking for other forms of life. Since we do know that meat makes intelligent life here, it makes sense that it might make it elsewhere under similar conditions as well. You are correct in attributing to me the claim about water, and in fact I would say exactly that. I don't think that this position entails that using water as a basis for searching for life is irrational, though. It makes perfect sense given that we know that at least some life requires water and that water exists elsewhere. What I think is that the purely inductive argument gives us very weak reasons for thinking that intelligence and life can't be created in other, non-meat ways.

      I see the theory that there is something special about meat as continuous with the idea that there is something special about celestial bodies and something special about humans. A commitment to those views made it very hard for us to figure out principles of motion and evolution. The belief that thought is inherently linguistic in nature, and therefore uniquely human, has, until quite recently, seriously retarded our ability to appreciate the general nature of representation and inference.

      By contrast, most of our really important insights into the nature of reality have come from people like Copernicus, Newton, Darwin, and Turing, whose basic suspicion is that what we are looking at here is not special, that the principles are far more general and admit of far more different ways of doing things than we suspect are possible. To paraphrase Hamlet, there are more ways of doing things than are dreamt of in our philosophy.

      Consider waves. Meat chauvinists, in my view, are in a far worse epistemic position than someone like Albert Michelson who, along with just about everyone else, knew for sure that waves travel through a medium. In the 19th century someone who actually set out to look for waves without a medium would have been a nutjob, a Don Quixote of science. But here we are.

      As you know even better than I, it is becoming much more common to explain the behavior of physical systems, from springs to plants to societies, in information-theoretic terms. I think this means that our concept of intelligence is, right now, being thoroughly re-examined, possibly undergoing revolutionary change, and that in a couple of generations it may be a perfectly obvious conceptual truth that, say, trees think. Just like today it is perfectly obvious to most people that animals feel pain. Most people who hear Descartes' view about this for the first time think it is appallingly stupid, but of course it wasn't.

      Conceptual revolutions of this kind can be discontinuous in a way that Kuhn struggled to explain with his idea of incommensurability. In denying some of the core conceptual assumptions of previous generations (objects require a force to keep moving, waves require a medium of propagation) we can often be legitimately accused of changing the question, and simply starting to use familiar terms in different ways. But that's how science works.

      The meat chauvinists may end up being right and I think they should work hard to prove they are right by coming up with an actual theory that shows how properties unique to meat generate thinking and intelligent behavior. (I mean, they don't even have a theory yet, far less a tested one.) Again, that's how science works, by brilliant people being convinced that something is the case well in advance of the evidence and then committing their lives to proving it. And if what you are saying is that you belong to that camp, then hats off. But I think most people without a dog in the fight are best advised to avoid any kind of chauvinism and keep their minds open to other possibilities.

      It's way more fun, too.

  13. Randy,
    Here’s a new perspective. Let’s think of the situation as if we are funders of scientific research projects. I agree with you that maybe trees will someday be considered to have minds, but until there’s an established theory that they do, it’s just nutty to fund scientific projects whose aim is to look for minds in trees. Similarly, we should be reducing funding for scientists who want to search for water-free life and meat-free minds.

    You said, “We know that at least some life requires water.” However, we know that ALL life we’ve seen requires water, so perhaps that is why I rate the inductive argument as stronger than you do. Similarly, we know that ALL the beings who can think that we’ve encountered are made of meat, not just some of them.

    I am not opposed to searching for life that isn’t water-based and searching for minds that aren’t meat-based, but, as a funder of scientific projects, I would give these open-ended searches much less funding than the searches for water-based life and meat-based minds. Turing, meanwhile, would be complaining that I have water-chauvinism and meat-chauvinism. When we go out on interstellar searches for conscious entities on other planets, we ought to look first at the meat-based organisms that are crawling around the planet, and only later pay attention to the planets’ computers and dust as potential sources of mind. If you find a computer, I’d recommend looking for the programmer, not looking inside the computer for a mind.

    I completely agree with your remarks about Copernicus, Newton, and Darwin being great scientists who challenged assumptions. Turing was a great mathematician and logician, but his idea that meat is irrelevant to mind isn’t one of his great ideas.

  14. Brad, we don't disagree when you put it in those practical terms, which you know I admire. The only thing I would emphasize, for the sake of all of our readers, is that "All X I have ever seen" is always some X and, in fact, a vanishingly small number of them. We don't know how much significance to attach to the range of our experience in this case, but generally speaking I think it's a good idea to attach very little to it.

    Replies
    1. Randy, Once again you make very helpful comments by being so clear in what you say. Too many philosophers believe being obscure is needed in order to be profound.

      Anyway, back to the action. Now that we know the universe is so vast, it will always be correct to say that, for any x, we are probably acquainted with only a vanishingly small number of x. Nevertheless, we teach the next generation that ALL electrons have such and such a mass, and that ALL healthy crows fly, and that all x are y. Let me know if you have doubts about whether it is appropriate to teach this to the next generation.

  15. Brad, that's a pretty interesting question about pedagogy. I am a Popperian on this issue. It is ok to teach that, but we should teach that what we mean by that is that it has so far resisted our best attempts to falsify it, not that it is unrevisable. And, being a Quinean as well, I would teach the same thing about claims regarded as conceptual truths. What's inappropriate to teach the next generation is that any of our knowledge is unrevisable.

  16. You and I both like much of what Popper and Quine say here. One caution about Popper, though. Popper should quit treating all scientific theories as mere guesses. They may start out as guesses, but after extensive testing they deserve a better status, such as being confirmed or established or verified. Popper has too weird a sense of these latter terms.

  17. Well, I think he agrees with that, don't you? Guesses that make high-risk, i.e., highly informative, predictions are corroborated in his terminology. He thinks it is more rational to trust a corroborated theory than one that is not corroborated or one that has been disconfirmed.

  18. Randy, Popper is surely correct that a theory that has withstood a lot of testing and is therefore said by him to be highly corroborated is not thereby conclusively verified because it might be falsified by tomorrow’s testing. Nevertheless, Popper is overly cautious.

    He says our most highly corroborated theory in some area is just the best so far. For Popper, our best theory is simply called “provisional,” as it was called before it withstood all that testing. Yes, the theory that the Earth is not flat is provisional, but is it merely provisional? The poor thing can’t get any respect from Popper.

    Suppose you cut your finger. Thankfully your blood soon clots. When scientists first investigated this process they found tiny little dense bodies floating among the red blood cells of humans. The little dense bodies were found clumped together and impeding blood flow out of the human body. These dense bodies were identified using light-based microscopes. Then they were identified via electron microscopes. The two kinds of microscopes work on different physical principles. Here we have different, theoretically independent, means of detection producing the same result—that there are platelets among the red blood cells. For Popper, the hypothesis that there are platelets among the red blood cells would be called corroborated and provisional. But that’s too cautious. The science books go beyond Popper and say that in this way platelets were discovered, and are real, and we know they exist. The platelets deserve this greater respect. The science books give it to them. Popper doesn’t.

  19. Brad, it sounds to me like your difference with Popper is terminological. It doesn't sound like you believe that the shape of the earth's surface or our theory of platelets is absolutely unrevisable, and it doesn't seem to me that he has any problem talking about these theories as highly corroborated.

    If your point is that he should be willing to come out and say that we know such theories to be true, I guess for me it just depends on what we mean by knowledge. If we're ok saying that we know X, though it's possible that X is not the case, I agree with you that there is no reason to deny highly corroborated scientific theories the status of knowledge. But there is a long philosophical tradition of regarding that statement as an absurdity, and I suspect he wanted to distance himself from that tradition. My own view is that the philosophical use of the term 'knowledge' isn't particularly useful and we're better off just talking about theories in terms of their explanatory power rather than whether they are true or false.

    My impression is that Popper enjoys quite a bit of respect among philosophically inclined scientists. David Deutsch is explicit in saying that Popper's view of scientific inquiry is the correct one. Richard Feynman, who had no use for philosophers, still liked to talk about scientific method, and what he said was indistinguishable from what Popper said, as, for example, here: http://tinyurl.com/qg9z6ov

    ReplyDelete
  20. Randy,
    Well, I agree with everything you say here. That Feynman film clip is wonderful. I just happen to be in that strange camp of philosophers who are happy to say "I know meat is required for mind, but I might be wrong." When certain epistemologists call that "absurd," I think that perhaps the problem is with their epistemology. Still, like you say, using the word "knowledge" isn't very helpful for doing advanced philosophy or science, though I find that Phil. 4 (critical thinking) students profit from discussions about knowledge.

    ReplyDelete
  21. Shucks, we've arrived at a point of complete agreement. I love that clip. I'm using it in my phil science course.

    ReplyDelete
  22. I wanted to add a little input on this topic from the perspective of new AI research. For a good overview of the latest breakthroughs you can check out Professor Hinton's page at the University of Toronto.

    Dowden, you said: “ I agree with him that if a computer were allowed to re-form itself so that they used those protons and electrons the way living organisms do in their meat, and if this computer passed the Turing Test, then I’d agree it was really thinking.”

    I think you are treating this point too shallowly. What AI researchers and some neuroscientists are looking at is not how meat-brains use and manipulate these electrons and protons, but what function these processes serve. Neurons in the cortex fire “bits” of information: they are either on or off. Importantly (more so than we previously thought), much of this firing is purely random. Some groupings are more strongly connected together than others, and as a whole are more likely to fire if a few of their member neurons fire. There is a lot of interesting biochemistry that is necessary to allow these neuronal connections to form and these firings to occur, but is all that necessary for the end product?
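    To make that firing picture concrete, here is a minimal toy sketch of my own (the weights, threshold, and noise level are invented for illustration and are not taken from any neuroscience model): each unit is either on or off, part of its firing is random, and units strongly connected to currently firing neighbors are more likely to fire themselves.

import numpy as np

rng = np.random.default_rng(0)

# Toy "group" of 5 binary units. weights[i, j] is a made-up connection
# strength from unit j to unit i; strongly connected units tend to co-fire.
weights = np.array([
    [0.0, 0.8, 0.6, 0.1, 0.0],
    [0.8, 0.0, 0.7, 0.0, 0.1],
    [0.6, 0.7, 0.0, 0.2, 0.0],
    [0.1, 0.0, 0.2, 0.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 0.0],
])

state = np.array([1, 0, 0, 0, 0])  # unit 0 fires; the others are silent

for step in range(5):
    drive = weights @ state                    # input from currently firing neighbors
    noise = rng.normal(0.0, 0.3, size=5)       # the purely random component of firing
    p_fire = 1 / (1 + np.exp(-(drive + noise - 0.5)))  # squash drive + noise into a probability
    state = (rng.random(5) < p_fire).astype(int)       # each unit ends up on (1) or off (0)
    print(step, state)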

    This is where modern AI research comes in. In the past few years people have been successfully adapting biologically inspired neural networks to perform amazingly complex classification and regression tasks. Natural language is among these tasks. These networks use electrons and protons moving around computer hardware to make their predictions, but the important thing is how the networks are engineered. Only within the past couple of years have researchers begun copying the random noise that exists in the firing of the brain into their neural networks. For a long time it was a mystery why evolution would converge on a solution that involved random noise and 1’s and 0’s for information processing. Neural networks featuring dropout randomly turn off half of the nodes in the network for each new training example. This is a form of regularization to reduce overfitting and is currently the state of the art (a small code sketch appears after the links below). Here is a great video giving more detail about some of these nets: https://www.ipam.ucla.edu/wowzavideo.aspx?vfn=10743.mp4&vfd=gss2012
    and the website with many similar lectures:
    https://www.ipam.ucla.edu/schedule.aspx?pc=gss2012
    Be warned, though: you may need to review your old linear algebra textbooks!
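    To make the dropout idea concrete, here is a minimal sketch of my own (a toy illustration, not code from the linked lectures); it uses the common "inverted dropout" convention, in which each unit is switched off with probability 0.5 during training and the survivors are rescaled so the expected activation is unchanged at test time.

import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, p_drop=0.5, training=True):
    """During training, randomly zero each unit with probability p_drop and
    rescale the survivors; at test time, pass the activations through unchanged."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p_drop   # True = unit is kept
    return activations * mask / (1.0 - p_drop)

hidden = np.array([0.2, 1.3, 0.7, 0.9, 0.4, 1.1])  # toy hidden-layer activations
print(dropout(hidden))                  # roughly half the units are silenced on this pass
print(dropout(hidden, training=False))  # at test time the full layer is used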

    It turns out that the randomness of neuronal firing isn’t just a funny coincidence of biology, but a necessary component of building really large neural networks. Now we think the 1’s and 0’s might be equally important.

    The takeaway should be that the same engineering solutions evolution naturally found can be instantiated in human-built mechanisms. Planes don’t fly like birds, but they both use similar principles and engineering solutions (for example, a light skeletal frame). I think you agree that a thinking machine doesn’t necessarily have to think like a human. Machines have access to many orders of magnitude more data and can work with data from more varied sources. This necessitates a different set of engineering features and therefore a different type of brain (I don’t like using the word mind). There is much more to say about this topic, but unfortunately space is limited!

    ReplyDelete
    Replies
    1. Chase,

      I do not want to suggest that “What AI researchers and some neuroscientists are looking at is not how meat-brains use and manipulate these electrons and protons.” That would be just as silly as trying to cure cancer by studying electron-proton interactions.

      The more important point on which you and I differ is where you say, “There is a lot of interesting biochemistry that is necessary to allow these neuronal connections to form and these firings to occur, but is all that necessary for the end product?” If you agree that it “is necessary,” then how can you question whether it is “all that necessary”? I think what you really mean is that many of the features of the biochemistry are not necessary. I agree. But I still say some features of the biochemistry are necessary in order to get the neuronal connections to perform properly for the purposes of creating intelligence, yet the functionalist says nothing-about-biochemistry is necessary for having A.I. in the computer.

      I don’t disagree with anything you say in your last three paragraphs, except perhaps that it seems they were designed to support the claim that random noise and 1s and 0s and proper programming will be sufficient to create AI even if it is AI on beer cans connected with strings. The biochemistry of the stuff that generates the noise and the 1s and 0s is surely going to be discovered to be important, too.

      Delete
    2. Dowden, I appreciate the clarity you added to some of my points; I should have been more careful in my wording.

      "It seems they were designed to support the claim that random noise and 1s and 0s and proper programming will be sufficient to create AI even if it is AI on beer cans connected with strings."

      I think this is another point we disagree over. I do not hold that the few properties that I mentioned are sufficient to create AI, but even if I did have an exhaustive list, I would disagree that an AI computer could be built from any old components as long as the relevant symbolic computations were being made. I think the mistake you are making here is equating a theoretical Turing machine with a real computing device. Turing machines can model any computation specifically because they have infinite storage space; this is obviously not achievable in the real world. This means we are more restricted in our automata if we want to perform computations outside of a theoretical environment. A thinking machine could theoretically be modeled by a Turing machine, but it could never practically be modeled. Furthermore, this argument assumes that real computers behave in completely deterministic ways, just as Turing machines do. Everyone familiar with real computers knows that this is not the case. In actual computers there is a surprisingly large amount of randomness caused by all sorts of environmental and internal events, and this randomness increases as the computer grows in size and complexity.

      Secondly, and more on the speculative side, it seems like a necessary component of consciousness is the ability to evaluate time series data. This is something modern AI architectures struggle with and is a very active area of research. A thinking machine would need to evaluate time series data quickly enough to act on it; it seems like the speed at which this would need to occur restricts the types of things that could be used to build an AI system. A beer can computer probably couldn't compute quickly enough, although I'm open to counter evidence.

      "I still say some features of the biochemistry are necessary in order to get the neuronal connections to perform properly for the purposes of creating intelligence."

      Maybe you could say a bit more about the specific properties of biochemistry that you think are necessary, but here's some counter-evidence to that theory. Neuronal connections vary in strength, and the strength of any connection is at least partly determined by how often the two neurons fire together. This system has been successfully reproduced in artificial neural networks. Biological neural networks have a feedback mechanism; signals are sent back and forth through the same groupings. Backpropagation models this mechanism in artificial neural networks. One of the best mathematical models for the firing of neurons is the sigmoid function; this is also one of the best activation functions in artificial neural networks. There is a large degree of randomness in the firing of neurons; dropout adds randomness to artificial neural networks and probably performs a similar type of function (the prevention of over-fitting). The low and mid level representations that the vision system forms (i.e. edge detectors, shape detectors, etc.) are nearly identical to the low and mid level representations that artificial neural networks form. ANNs display transfer learning: you can train a network on image data and it still performs well on language and audio classification without any extra adjustment. ANNs have spontaneously learned to reason by analogy: without any explicit programming, a neural network answered "France" when given the prompt "Rome is to Italy as Paris is to ____."
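      As a concrete illustration of two of those parallels (the sigmoid activation function and backpropagation as a feedback mechanism), here is a minimal sketch of my own: a toy network with one hidden layer of sigmoid units learning XOR by backpropagation. The architecture, learning rate, and data are invented for illustration and are not taken from any of the research mentioned above.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # The sigmoid squashes any input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, the classic problem a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units; weights start as random values.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

for _ in range(20000):
    # Forward pass: sigmoid activations at the hidden and output layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is sent back through the same
    # connections to adjust the weights (backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically ends up close to [0, 1, 1, 0]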

      Delete
  23. Chase, we have many points of agreement. For instance, we agree that a necessary component of consciousness is the ability to evaluate time series data. We agree that more progress on AI will be made via artificial neural networks than by old-style computers. We agree that an adequate explanation of consciousness isn’t going to require appeal to non-physical entities.

    In your posting you say, “I would disagree that an AI computer could be built from any old components as long as the relevant symbolic computations were being made.” I, too, disagree. But this claim that an AI computer could be realized from “any old components” is the heart of functionalism. It is functionalism’s “multiple realizability thesis.” So, the implication is that you, too, agree that functionalism is incorrect.

    We may have a small disagreement about Turing machines. I agree that a universal Turing machine potentially has an infinite amount of memory available. However, an Apple PC can be simulated by a Turing machine with a finite amount of memory. The success of the Turing machine at simulating all possible activity of the Apple PC shows that the Apple PC is still essentially a Turing machine. Do you agree?

    Now, you make a deeper point. You say that even if old Turing-machine-like computers can’t understand what they are doing, maybe more sophisticated machines can. Well, I agree to that, too, because I’m a more sophisticated machine and I can understand what I’m doing. But you didn’t mean sophisticated machines that are THAT sophisticated. Your sophisticated machines (would you call them computers?) are not Turing machines that do one computation at a time and that basically shuffle symbols, but are instead, say, artificial neural networks (ANNs) that can learn on their own and aren’t built primarily as symbol manipulators. The ANNs do not need to be specifically programmed to solve analogy problems or recognize human faces; they just learn to do it thanks to their feedback mechanisms and their many past attempts.

    I guess I’m just not as optimistic about ANNs as you are. Unlike artificial neural networks, the real cortical networks that we find in squirrel brains and human brains have a very different biochemistry. I may be wrong, but I believe you would claim there is evidence that this difference isn’t important. You believe there is suggestive evidence in the fact that “The low and mid level representations that the vision system forms (i.e. edge detectors, shape detectors, etc.) are nearly identical to the low and mid level representations that artificial neural networks form.” I agree that this is suggestive and interesting, but I still would like to challenge the claim. You say that the evidence that ANNs can solve analogy puzzles such as filling in the blank in “Rome is to Italy as Paris is to _____” is suggestive evidence that this difference in biochemistry is irrelevant. Well, you do have evidence that it is irrelevant for solving analogy problems, but I still don’t see this as evidence that the difference is irrelevant to their understanding what they are doing. IBM’s Watson computer program also can solve analogy problems, but it seems clear that it does not know what it is doing. Is there any reason to think an ANN knows what it is doing? If there were, then you’d have a counterexample to my thesis that a thinking thing’s biochemistry is crucial for knowing what it is doing.

    I would guess that you make the claim I was challenging in the previous paragraph because you hold the following assumption about AI: Having intelligence is having the ability to do lots of the little things that we all agree it takes intelligence for humans to do. These are the little activities of edge detection and face recognition and analogy solving. So, if you build a machine that is successful at all of these “little” activities, then it will thereby be successful at the big one of being able to think. Do you make that assumption?

    ReplyDelete
  24. Here's a recent paper I came across that might be of interest to some of you in this thread:

    "Is Consciousness Computable? Quantifying Integrated Information Using
    Algorithmic Information Theory"

    http://arxiv.org/pdf/1405.0126v1.pdf

    ReplyDelete