Comments on "the dance of reason": Could a computer ever have a mind? (Sac State Philosophy)

M Andrews (2014-05-11 11:44):

Here's a recent paper I came across that might be of interest to some of you in this thread:

"Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory"

http://arxiv.org/pdf/1405.0126v1.pdf

Bradley Dowden (2014-05-08 22:35):

Chase, we have many points of agreement. For instance, we agree that a necessary component of consciousness is the ability to evaluate time series data. We agree that more progress on AI will be made via artificial neural networks than by old-style computers. We agree that an adequate explanation of consciousness isn’t going to require appeal to non-physical entities.

In your posting you say, “I would disagree that an AI computer could be built from any old components as long as the relevant symbolic computations were being made.” I, too, disagree. But this claim that an AI computer could be realized from “any old components” is the heart of functionalism. It is functionalism’s “multiple realizability thesis.” So the implication is that you, too, agree that functionalism is incorrect.

We may have a small disagreement about Turing machines. I agree that a universal Turing machine potentially has an infinite amount of memory available.
However, an Apple PC can be simulated by a Turing machine with a finite amount of memory. The success of the Turing machine at simulating all possible activity of the Apple PC shows that the Apple PC is still essentially a Turing machine. Do you agree?

Now, you make a deeper point. You say that even if old Turing-machine-like computers can’t understand what they are doing, maybe more sophisticated machines can. Well, I agree to that, too, because I’m a more sophisticated machine and I can understand what I’m doing. You didn’t mean sophisticated machines that are THAT sophisticated. Your sophisticated machines (would you call them computers?) are not Turing machines that do one computation at a time and that basically shuffle symbols, but are instead, say, artificial neural networks (ANNs) that can learn on their own and aren’t built primarily as symbol manipulators. The ANNs do not need to be specifically programmed to solve analogy problems or recognize human faces; they just learn to do it thanks to their feedback mechanisms and their many past attempts.

I guess I’m just not as optimistic about ANNs as you are. Real cortical neural networks, the kind we find in squirrel brains and human brains, have a very different biochemistry from ANNs. I may be wrong, but I believe you would claim there is evidence that this difference isn’t important. You believe there is suggestive evidence in the fact that “The low and mid level representations that the vision system forms (i.e. edge detectors, shape detectors, etc.) are nearly identical to the low and mid level representations that artificial neural networks form.” I agree that this is suggestive and interesting, but I still would like to challenge the claim. You say that the evidence that ANNs can solve analogy puzzles such as filling in the blank in “Rome is to Italy as Paris is to _____” is suggestive evidence that the biochemistry of ANNs is irrelevant.
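The analogy evidence discussed here is usually demonstrated with word embeddings, where the blank is filled in by vector arithmetic rather than by any explicit analogy rule. A minimal sketch of that technique; the 2-D vectors below are invented toy values for illustration, not real learned embeddings:

```python
import numpy as np

# Toy 2-D "embeddings" -- made-up values for illustration only; real word
# vectors (e.g. word2vec) have hundreds of dimensions learned from text.
vecs = {
    "Rome":   np.array([1.0, 5.0]),
    "Italy":  np.array([1.2, 1.0]),
    "Paris":  np.array([4.0, 5.1]),
    "France": np.array([4.1, 1.2]),
    "Berlin": np.array([7.0, 4.9]),
}

def solve_analogy(a, b, c):
    """a : b :: c : ?  via the vector offset  b - a + c."""
    target = vecs[b] - vecs[a] + vecs[c]
    # Return the nearest word that isn't one of the inputs.
    distances = {w: np.linalg.norm(v - target)
                 for w, v in vecs.items() if w not in (a, b, c)}
    return min(distances, key=distances.get)

print(solve_analogy("Rome", "Italy", "Paris"))  # France, with these toy vectors
```

Real systems learn the vectors from large text corpora; the point of the sketch is only that the analogy is answered by geometry over learned representations, with no analogy-solving rule programmed in.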
Well, you do have evidence that it is irrelevant for solving analogy problems, but I still don’t see this as evidence that the biochemistry of ANNs is irrelevant for their understanding what they are doing. IBM’s Watson computer program can also solve analogy problems, but it seems clear that it does not know what it is doing. Is there any reason to think an ANN knows what it is doing? If there were, then you’d have a counterexample to my thesis that a thinking thing’s biochemistry is crucial for knowing what it is doing.

I would guess that you make the claim I was challenging in the previous paragraph because you hold the following assumption about AI: having intelligence is having the ability to do lots of the little things that we all agree it takes intelligence for humans to do. These are the little activities of edge detection and face recognition and analogy solving. So, if you build a machine that is successful at all of these “little” activities, then it will thereby be successful at the big one of being able to think. Do you make that assumption?

Chase Van Etten (2014-05-08 12:44):

Dowden, I appreciate the clarity you added to some of my points; I should have been more careful in my wording.

"It seems they were designed to support the claim that random noise and 1s and 0s and proper programming will be sufficient to create AI even if it is AI on beer cans connected with strings."

I think this is another point we disagree over. I do not hold that the few properties that I mentioned are sufficient to create AI, but if I did have an exhaustive list I would disagree that an AI computer could be built from any old components as long as the relevant symbolic computations were being made.
I think the mistake you are making here is equating a theoretical Turing machine with a real computing device. Turing machines can model any computation specifically because they have infinite storage space; this is obviously not achievable in the real world. This means we are more restricted in our automata if we want to perform computations outside of a theoretical environment. A thinking machine could theoretically be modeled by a Turing machine, but it could never practically be modeled. Furthermore, this argument assumes that real computers behave in completely determined ways just as Turing machines do. Everyone familiar with real computers knows that this is not the case. In actual computers there is a surprisingly large amount of randomness caused by all sorts of environmental and internal events, and this property increases as the computer grows in size and complexity.

Secondly, and more on the speculative side, it seems like a necessary component of consciousness is the ability to evaluate time series data. This is something modern AI architectures struggle with, and it is a very active area of research. A thinking machine would need to evaluate time series data quickly enough to act on it; it seems like the speed at which this would need to occur restricts the types of things that could be used to build an AI system. A beer can computer probably couldn't compute quickly enough, although I'm open to counter-evidence.

"I still say some features of the biochemistry are necessary in order to get the neuronal connections to perform properly for the purposes of creating intelligence."

Maybe you could say a bit more about the specific properties of biochemistry that you think are necessary, but here's some counter-evidence to that theory. Neuronal connections vary in strength, and the strength of any connection is at least partly determined by how often the two neurons fire together.
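The rule just stated, that connections strengthen when two neurons fire together, is the Hebbian learning rule, and it can be written down directly. A toy sketch (the network size, learning rate, and firing probabilities are arbitrary choices for illustration, not a model of any real circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                      # four toy "neurons"
w = np.zeros((n, n))       # connection strengths, initially zero
eta = 0.1                  # learning rate (arbitrary)

# Simulate binary firing patterns; neurons 0 and 1 often fire together.
for _ in range(200):
    x = (rng.random(n) < 0.2).astype(float)   # sparse random firing
    if rng.random() < 0.5:
        x[0] = x[1] = 1.0                     # correlated firing of 0 and 1
    # Hebbian update: strengthen w[i, j] whenever neurons i and j co-fire.
    w += eta * np.outer(x, x)
np.fill_diagonal(w, 0.0)                      # ignore self-connections

# The 0-1 connection ends up far stronger than the uncorrelated 2-3 pair.
print(w[0, 1], w[2, 3])
```

Frequently co-active pairs accumulate large weights while uncorrelated pairs stay weak, which is the biological pattern the comment says has been reproduced in artificial networks.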
This system has been successfully reproduced in artificial neural networks. Biological neural networks have a feedback mechanism: signals are sent back and forth through the same groupings. Back-propagation models this mechanism in artificial neural networks. One of the best mathematical models for the firing of neurons is the sigmoid function; this is also one of the best activation functions in artificial neural networks. There is a large degree of randomness in the firing of neurons; dropout adds randomness to artificial neural networks and probably performs a similar type of function (the prevention of over-fitting). The low and mid level representations that the vision system forms (i.e. edge detectors, shape detectors, etc.) are nearly identical to the low and mid level representations that artificial neural networks form. ANNs display transfer learning: you can train a network on image data and it still performs well on language and audio classification without any extra adjustment. ANNs spontaneously learned to reason analogously: without any explicit programming, a neural network answered "France" when given the prompt "Rome is to Italy as Paris is to ____."

Bradley Dowden (2014-05-07 21:07):

Chase,
I do not want to suggest that “What AI researchers and some neuroscientists are looking at is not how meat-brains use and manipulate these electrons and protons.” That would be just as silly as trying to cure cancer by studying electron-proton interactions.

The more important point on which you and I differ is where you say, “There is a lot of interesting biochemistry that is necessary to allow these neuronal connections to form and these firings to occur, but is all that necessary for the end product?” If you agree that it “is necessary,” then how can you question whether it is “all that necessary”? I think what you really mean is that many of the features of the biochemistry are not necessary. I agree. But I still say some features of the biochemistry are necessary in order to get the neuronal connections to perform properly for the purposes of creating intelligence, yet the functionalist says nothing-about-biochemistry is necessary for having AI in the computer.

I don’t disagree with anything you say in your last three paragraphs, except perhaps that it seems they were designed to support the claim that random noise and 1s and 0s and proper programming will be sufficient to create AI even if it is AI on beer cans connected with strings. The biochemistry of the stuff that generates the noise and the 1s and 0s is surely going to be discovered to be important, too.

Bradley Dowden (2014-05-07 20:50):

Hans,
We are using different definitions for “Turing machine.” In Turing’s sense of “Turing machine,” we have already created many Turing machines. A Turing machine is just a computer. Turing and others showed that a special “universal” Turing machine can do whatever any other computer can do. In that sense we can buy a Turing machine from the Apple Corporation. There is no Turing machine that has passed the Turing Test, however.

I did misinterpret your remark about brains and computers both being made of the same stuff, so computers can do what human brains can do. I was making the trivial point that both are made of electrons, and all electrons behave the same way whether they are in brains or computers. At a higher level of organization, the computer parts and the brain parts seem to me not to be all that analogous. I do agree that the program running in the collection of neurons is essential to having a mind, but my main claim is that it takes more than the program to make a mind. It also takes appropriate biochemistry within the neurons. You disagree on this point, because you say, “I don't see a significant enough difference between the chemistry of neurons and the chemistry/physics of a computer to rule out the possibility of the computer performing a similar task.” To help understand this point, ask yourself whether computers could ever digest carrots. I think they can’t because they don’t have the proper biochemistry.
Would you reply with “I don't see a significant enough difference between the chemistry of carrots and the chemistry/physics of a computer to rule out the possibility of the computer performing a similar task” of digesting carrots?

You promote functionalism because you say, “Consciousness seems to be an emergent property of systems which are complex, structured in some relevant way.” I would disagree and say that consciousness seems to be an emergent property of systems which are complex, structured in some relevant way, and also made of the right stuff that permits it to have this functioning. A computer program that merely shuffles symbols is not made of the right stuff to quench our thirst even if it organizes its symbols so that it symbolizes water.

I agree that the rules which govern the activity of our brain are on such a micro-level that we usually can't meaningfully ever say what piece of information a neuron is dealing with. We probably will only know to some degree what a full community of neurons is dealing with. I agree that “the same will be true…of a successful AI. No one will be able to tell what piece of information each node of the processing system is dealing with.” Nevertheless, this does not show that meat is irrelevant. The multiple realizability thesis says that the stuff is irrelevant in a successful AI. It could be made of star systems or of light beams and beer cans.
I just don’t believe you can know that the stuff is irrelevant.

I agree with you when you say, “I would also like to note that I am not suggesting that complexity is sufficient for intelligence, nor even complexity with micro-level rules, but it is likely necessary.” Yes, it is necessary but not sufficient.

I also agree with you when you say, “There are no semantics at the level of neurons/DNA, only at the level of systems.”

I would never suggest that “the lack of semantics at the motherboard/software level be believed to show lack of semantics on a neural-network level.”

OK, I hope that helps to clarify our differences.

Hans Baker (2014-05-07 14:56):

Sorry, that "Unknown" is me, Hans.

Hans Baker (2014-05-07 13:58):

Prof. Dowden, I think you misunderstood what I meant by my comment in class on this topic. What I meant was that an analogue to the ion gradients which cause the firing of action potentials in neurons is the jumping of quantized packets between the nodes of a CPU. Similarly, I would say the motherboard/neural-network is analogous to the structure of the neuron/brain. The software would be analogous to the DNA/enzymes in the neurons which govern what the neurons do. I did not mean to say that just because they are made of the same particles that those particles would act similarly to the particles in the brain.
I just meant that I don't see a significant enough difference between the chemistry of neurons and the chemistry/physics of a computer to rule out the possibility of the computer performing a similar task.

I think one serious misunderstanding on the side of the meat-chauvinists is thinking that the software of a successful AI would be a simulation of mind. The software should not be any such thing, I think. The way our brains work isn't anything like the thinking in our "minds", and the software wouldn't be similar to this thinking either. Consciousness seems to be an emergent property of systems which are complex, structured in some relevant way, and governed by rules on a micro-level rather than a macro level. The Chinese room argument treats a Turing machine as something which has a list of rules it follows, but if a Turing machine could work this way, we would likely have been much more successful in our attempts to make one by now. The rules which govern the activity of our brain are on such a micro-level that we can't meaningfully ever say what piece of information a neuron is dealing with. We only know to some degree what communities of neurons are dealing with. The same will be true, I believe, of a successful AI. No one will be able to tell what piece of information each node of the processing system is dealing with.

I would also like to note that I am not suggesting that complexity is sufficient for intelligence, nor even complexity with micro-level rules, but it is likely necessary. There are no semantics at the level of neurons/DNA, only at the level of systems. Why should the lack of semantics at the motherboard/software level be believed to show lack of semantics on a neural-network level? I don't think the syntax vs.
semantics argument would apply to a successful Turing machine.

Chase Van Etten (2014-05-07 00:37):

I wanted to add a little input on this topic from the perspective of new AI research. For a good overview of the latest breakthroughs you can check out Professor Hinton's page from the University of Toronto.

Dowden, you said: “I agree with him that if a computer were allowed to re-form itself so that they used those protons and electrons the way living organisms do in their meat, and if this computer passed the Turing Test, then I’d agree it was really thinking.”

I think you are treating this point too shallowly. What AI researchers and some neuroscientists are looking at is not how meat-brains use and manipulate these electrons and protons, but what function these processes serve. Neurons in the cortex fire “bits” of information: they are either on or off. Importantly (more so than we previously thought), much of this firing is purely random. Some groupings are more strongly connected together than others, and as a whole are more likely to fire if a few of their member neurons fire. There is a lot of interesting biochemistry that is necessary to allow these neuronal connections to form and these firings to occur, but is all that necessary for the end product?

This is where modern AI research comes in. In the past few years people have been successfully adapting biologically inspired neural networks to perform amazingly complex classification and regression tasks. Natural language is among these tasks. These networks use electrons and protons moving around computer hardware to make these predictions, but the important thing is how the networks are engineered.
Only within the past couple of years have researchers been copying the random noise that exists in the firing of the brain into their neural networks. For a long time it was a mystery why evolution would settle on a solution that involved random noise and 1s and 0s for information processing. Neural networks featuring dropout randomly turn off half of the nodes in the network for each new training example. This is a form of regularization to reduce overfitting and is currently the state of the art. Here is a great video giving more detail about some of these nets: https://www.ipam.ucla.edu/wowzavideo.aspx?vfn=10743.mp4&vfd=gss2012
and the website with many similar lectures: https://www.ipam.ucla.edu/schedule.aspx?pc=gss2012
Be warned though, you may need to review your old linear algebra textbooks!

It turns out that the randomness of neuronal firing isn’t just a funny coincidence of biology, but a necessary component of building really large neural networks. Now we think the 1s and 0s might be equally important.

The takeaway should be that the same engineering solutions evolution naturally found can be instantiated in human-built mechanisms. Planes don’t fly like birds, but they both use similar principles and engineering solutions (for example, a light skeletal frame). I think you agree that a thinking machine doesn’t necessarily have to think like a human. Machines have access to many orders of magnitude more data and can work with data from more varied sources. This necessitates a different set of engineering features and therefore a different type of brain (I don’t like using the word mind).
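The dropout operation described above is simple to state in code. A minimal sketch of the training-time step, assuming the conventional 0.5 rate mentioned in the comment; this is the "inverted dropout" formulation, which rescales the surviving units so no adjustment is needed at test time:

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, p=0.5, training=True):
    """Randomly zero each unit with probability p during training.

    Surviving units are scaled by 1/(1-p) so the expected activation
    matches what the network sees at test time, when nothing is dropped.
    """
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.ones(10)       # a toy hidden-layer activation vector
out = dropout(h)
print(out)            # each entry is either 0.0 (dropped) or 2.0 (kept, rescaled)
```

Because a different random half of the network is silenced on each training example, no single unit can be relied on, which is what gives dropout its regularizing, overfitting-reducing effect.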
Much more to say about this topic; unfortunately, space is limited!

G. Randolph Mayes (2014-05-05 11:40):

Shucks, we've arrived at a point of complete agreement. I love that clip. I'm using it in my phil science course.

Bradley Dowden (2014-05-04 21:23):

Randy,
Well, I agree with everything you say here. That Feynman film clip is wonderful. I just happen to be in that strange camp of philosophers who are happy to say "I know meat is required for mind, but I might be wrong." When certain epistemologists call that "absurd," I think that perhaps the problem is with their epistemology. Still, like you say, using the word "knowledge" isn't very helpful for doing advanced philosophy or science, though I find that Phil. 4 (critical thinking) students profit from discussions about knowledge.

G. Randolph Mayes (2014-05-04 19:36):

Brad, it sounds to me like your difference with Popper is terminological. It doesn't sound like you believe that the shape of the earth's surface or our theory of platelets is absolutely unrevisable, and it doesn't seem to me that he has any problem talking about these theories as highly corroborated.

If your point is that he should be willing to come out and say that we know such theories to be true, I guess for me it just depends on what we mean by knowledge. If we're OK saying that we know X, though it's possible that X is not the case, I agree with you that there is no reason to deny highly corroborated scientific theories the status of knowledge. But there is a long philosophical tradition of regarding that statement as an absurdity, and I suspect he wanted to distance himself from that tradition.
My own view is that the philosophical use of the term 'knowledge' isn't particularly useful and we're better off just talking about theories in terms of their explanatory power than whether they are true or false.

My impression is that Popper enjoys quite a bit of respect among philosophically inclined scientists. David Deutsch is explicit in saying that Popper's view of scientific inquiry is the correct one. Richard Feynman, who had no use for philosophers, still liked to talk about scientific method, and what he said was indistinguishable from what Popper said as, for example, here: http://tinyurl.com/qg9z6ov

Bradley Dowden (2014-05-04 10:34):

Randy, Popper is surely correct that a theory that has withstood a lot of testing, and is therefore said by him to be highly corroborated, is not thereby conclusively verified, because it might be falsified by tomorrow’s testing. Nevertheless, Popper is overly cautious.

He says our most highly corroborated theory in some area is just the best so far. For Popper, our best theory is simply called “provisional,” as it was called before it withstood all that testing. Yes, the theory that the Earth is not flat is provisional, but is it merely provisional? The poor thing can’t get any respect from Popper.

Suppose you cut your finger. Thankfully your blood soon clots. When scientists first investigated this process they found tiny little dense bodies floating among the red blood cells of humans. The little dense bodies were found clumped together and impeding blood flow out of the human body. These dense bodies were identified using light-based microscopes. Then they were identified via electron microscopes.
The two kinds of microscopes work on different physical principles. Here we have different, theoretically independent, means of detection producing the same result: that there are platelets among the red blood cells. For Popper, the hypothesis that there are platelets among the red blood cells would be called corroborated and provisional. But that’s too cautious. The science books go beyond Popper and say that in this way platelets were discovered, and are real, and we know they exist. The platelets deserve this greater respect. The science books give it to them. Popper doesn’t.

G. Randolph Mayes (2014-05-03 14:34):

Well, I think he agrees with that, don't you? Guesses that make high-risk, i.e., highly informative predictions are corroborated in his terminology. He thinks it is more rational to trust a corroborated theory than one that is not corroborated or one that has been disconfirmed.

Bradley Dowden (2014-05-03 11:15):

You and I both like much of what Popper and Quine say here. One caution about Popper, though. Popper should quit treating all scientific theories as mere guesses. They may start out as guesses, but after extensive testing they deserve a better status, such as being confirmed or established or verified. Popper has too weird a sense of these latter terms.
G. Randolph Mayes (2014-05-03 09:51):

Brad, that's a pretty interesting question about pedagogy. I am a Popperian on this issue. It is OK to teach that, but we should teach that what we mean by that is that it has so far resisted our best attempts to falsify it, not that it is unrevisable. And, being a Quinean as well, I would teach the same thing about claims regarded as conceptual truths. What's inappropriate to teach the next generation is that any of our knowledge is unrevisable.

Bradley Dowden (2014-05-03 09:34):

Stephen, I agree with you that there probably is more than one way to think. Birds may do it differently than squirrels or humans. We should be open to that possibility, even for computers. I don’t believe that ONLY brains can cause minds. I do believe in the possibility of artificial intelligence (AI), though I think that you’ll never create this on the kind of computers that run Watson and Deep Blue and Microsoft Word programs. However, birds and squirrels and humans all use their “meat” in some unknown but essential way that needs to be figured out before an artificial thinking being gets created.

My student Hans Baker said in the Phil. 176 class on Thursday 5/1/14 that since computers are made of protons and electrons and the meat is made of protons and electrons, why do I single out meat for special treatment?
Well, I agree with him that if a computer were allowed to re-form itself so that it used those protons and electrons the way living organisms do in their meat, and if this computer passed the Turing Test, then I’d agree it was really thinking. I would even agree that if the computer could re-form itself so that it used its protons and electrons to have the same chemistry as water, then the computer water would actually quench the thirst of thirsty animals. The problem is that computers aren’t allowed to re-form themselves, if they are going to be computers in the sense of just symbol processors working on linguistic syntax. Those restricted computers, even if they pass the Turing Test, won’t really be thinking. The Strong AI advocates believe otherwise.

Bradley Dowden (2014-05-02 23:19):

Randy, once again you make very helpful comments by being so clear in what you say. Too many philosophers believe being obscure is needed in order to be profound.

Anyway, back to the action. Now that we know the universe is so vast, it will always be correct to say that, for any x, we are probably acquainted with only a vanishingly small number of x. Nevertheless, we teach the next generation that ALL electrons have such and such a mass, and that ALL healthy crows fly, and that all x are y. Let me know if you have doubts about whether it is appropriate to teach this to the next generation.

Bradley Dowden (2014-05-02 23:10):

Randy, you make some very interesting points here.
I wonder, though, when we say computers don't think, if we aren't saying more than that computers don't think like we do. Don't we mean computers don't think like anyone does? And that includes birds.

G. Randolph Mayes (2014-05-02 16:17):

Brad, we don't disagree when you put it in those practical terms, which you know I admire. The only thing I would emphasize, for the sake of all of our readers, is that "All X I have ever seen" is always some X, and, in fact, a vanishingly small number of them. We don't know how much significance to attach to the range of our experience in this case, but generally speaking I think it's a good idea to attach very little to it.

Bradley Dowden (2014-05-02 14:50):

Randy,
Here’s a new perspective. Let’s think of th...Randy,<br />Here’s a new perspective. Let’s think of the situation as if we are funders of scientific research projects. I agree with you that maybe trees will someday be considered to have minds, but until there’s an established theory that they do, it’s just nutty to fund scientific projects whose aim is to look for minds in trees. Similarly, we should be reducing funding for scientists who want to search for water-free life and meat-free minds.<br /><br />You said, “We know that at least some life requires water.” However, we know that ALL life we’ve seen requires water, so perhaps that is why I rate the inductive argument as stronger than you do. Similarly, we know that ALL the beings who can think that we’ve encountered are made of meat, not just some of them. <br /><br />I am not opposed to searching for life that isn’t water-based and searching for minds that aren’t meat-based, but as a funder of scientific projects, I would give these open-ended searches much less funding than the searches for water-based life and meat-based minds. Turing, meanwhile, would be complaining that I have water-chauvinism and meat-chauvinism. When we go out on interstellar searches for conscious entities on other planets, we ought to look first at the meat-based organisms that are crawling around the planet, and only later pay attention to the planets’ computers and dust as potential sources of mind. If you find a computer, I’d recommend looking for the programmer, not looking inside the computer for a mind. <br /><br />I completely agree with your remarks about Copernicus, Newton, and Darwin being great scientists who challenged assumptions. Turing was a great mathematician and logician, but his idea that meat is irrelevant to mind isn’t one of his great ideas. 
<br />Bradley Dowdenhttps://www.blogger.com/profile/03398822652849338607noreply@blogger.comtag:blogger.com,1999:blog-2492055523235356445.post-25090960440042391712014-05-02T09:29:16.469-07:002014-05-02T09:29:16.469-07:00Brad, that makes a lot of sense. I like your point...Brad, that makes a lot of sense. I like your point about the hypothesis being a legitimate basis for looking for other forms of life. Since we do know that meat makes intelligent life here, it makes sense that it might make it elsewhere under similar conditions as well. You are correct in attributing to me the claim about water and in fact I would say exactly that. I don't think that this position entails that using water as a basis for searching for life is irrational, though. It makes perfect sense given that we know that at least some life requires water and that water exists elsewhere. What I think is that the purely inductive argument gives us very weak reasons for thinking that intelligence and life can't be created in other non-meat ways. <br /><br />I see the theory that there is something special about meat as continuous with the idea that there is something special about celestial bodies and something special about humans. A commitment to those views made it very hard for us to figure out principles of motion and evolution. The belief that thought is inherently linguistic in nature, and therefore uniquely human, has, until quite recently, seriously retarded our ability to appreciate the general nature of representation and inference. <br /><br />By contrast, most of our really important insights into the nature of reality have come from people like Copernicus, Newton, Darwin, and Turing, whose basic suspicion is that what we are looking at here is not special, that the principles are far more general and admit of far more different ways of doing things than we suspect are possible. 
To paraphrase Hamlet, there are more ways of doing things than are dreamt of in our philosophy.<br /><br />Consider waves. Meat chauvinists, in my view, are in a far worse epistemic position than someone like Albert Michelson who, along with just about everyone else, knew for sure that waves travel through a medium. In the 19th century someone who actually set out to look for waves without a medium would have been a nutjob, a Don Quixote of science. But here we are. <br /><br />As you know even better than I, it is becoming much more common to explain the behavior of physical systems, from springs to plants to societies, in information-theoretic terms. I think this means that our concept of intelligence is, right now, being thoroughly re-examined, possibly undergoing revolutionary change, and that in a couple of generations it may be a perfectly obvious conceptual truth that, say, trees think. Just like today it is perfectly obvious to most people that animals feel pain. Most people who hear Descartes' view about this for the first time think it is appallingly stupid, but of course it wasn't.<br /><br />Conceptual revolutions of this kind can be discontinuous in a way that Kuhn struggled to explain with his idea of incommensurability. In denying some of the core conceptual assumptions of previous generations (objects require a force to keep moving, waves require a medium of propagation) we can often be legitimately accused of changing the question, and simply starting to use familiar terms in different ways. But that's how science works.<br /><br />The meat chauvinists may end up being right and I think they should work hard to prove they are right by coming up with an actual theory that shows how properties unique to meat generate thinking and intelligent behavior. (I mean, they don't even have a theory yet, far less a tested one.) 
Again, that's how science works, by brilliant people being convinced that something is the case well in advance of the evidence and then committing their lives to proving it. And if what you are saying is that you belong to that camp, then hats off. But I think most people without a dog in the fight are best advised to avoid any kind of chauvinism and keep their minds open to other possibilities.<br /><br />It's way more fun, too.G. Randolph Mayeshttps://www.blogger.com/profile/18285281186698499962noreply@blogger.comtag:blogger.com,1999:blog-2492055523235356445.post-64646899143962497122014-05-01T22:50:36.853-07:002014-05-01T22:50:36.853-07:00Randy, I’m glad you are considering the meat facto...Randy, I’m glad you are considering the meat factor in thinking. We have a common goal in wanting science to succeed in explaining thinking. We disagree, though, about the value of inductive arguments. I put a lot of stock in the fact that I’ve looked all around as much as I could (in the vanishingly small part of the universe I can access) and I’ve found a pattern: every one of the thinking things comes with biological features. And the evidence suggests the biology is very important. Shut down the metabolism of a human being and it stops thinking. Cut off the heads of kittens, and they stop thinking. <br /><br />You don’t “put much stock” in these facts. You say, “I just don't see how anyone who grasps what a vanishingly small bit of the universe we have ever sampled can put much stock on the purely inductive argument that every unambiguously thinking thing we have ever met was made of meat.” If you say that, then I can imagine you also saying, “I just don't see how anyone who grasps what a vanishingly small bit of the universe we have ever sampled can put much stock on the purely inductive argument that every unambiguously living thing we have ever met was made of something that needed water.” Yet astrobiologists put a lot of stock in arguments like this. 
It guides their search for signs of extraterrestrial life. Similarly, I recommended that researchers who want to understand thinking should pay attention to biology. Philosophers should, too. Thinking is surely more than merely running a program in the brain, isn’t it?<br /><br />I agree with you that meat is still a mystery. We don’t have a theory of thinking that indicates how the meat is important, but I believe we know enough about how to build explanatory theories that it is clear that we should incorporate the power of meat into that future theory. If not, we won’t get very far in explaining thinking, and explaining thinking is a common goal we have. <br />Bradley Dowdenhttps://www.blogger.com/profile/03398822652849338607noreply@blogger.comtag:blogger.com,1999:blog-2492055523235356445.post-41648809206860556122014-05-01T15:27:04.572-07:002014-05-01T15:27:04.572-07:00Gosh, these meat people are really hard for me to ...Gosh, these meat people are really hard for me to relate to. If they don't have a theory about what properties unique to meat are responsible for thinking, then it makes little sense to me that they would suppose meat to be required. To me the multiple realizability of lots of other known emergent properties should be enough to give them pause on inductive grounds. I just don't see how anyone who grasps what a vanishingly small bit of the universe we have ever sampled can put much stock on the purely inductive argument that every unambiguously thinking thing we have ever met was made of meat. If we eventually identify physical properties necessary for mentality that we have strong reasons for thinking can only be produced by carbon chemistry, then I would join the meat camp. G. 
Randolph Mayeshttps://www.blogger.com/profile/18285281186698499962noreply@blogger.comtag:blogger.com,1999:blog-2492055523235356445.post-24082789576684347872014-05-01T12:35:10.071-07:002014-05-01T12:35:10.071-07:00Matt, From what you've just said I now believe...Matt, From what you've just said I now believe your reasoning about layers of doubt is better than I first realized. Yes, it's a gigantic leap to say we know what Searle's intuitions would be like if X, Y, and Z. <br /><br />Also, I agree that Searle was motivated by looking at 1980s computers rather than looking at more sophisticated computers that we have today. However, wouldn't you agree that today's and tomorrow's computers will still be equivalent to Turing machines of the same logical form that Turing imagined back in 1950? (This is the Church-Turing thesis.) If so, Searle could in principle still memorize the Turing machine's program, though not really in practice. <br /><br />However, I notice you are avoiding the meat question. John Searle and Paul Churchland and Patricia Churchland agree that some kind of meat is needed for language understanding. Mere behavioral indistinguishability from a genuine understander of Chinese in the Chinese Room scenario doesn't imply genuine understanding of Chinese. Bradley Dowdenhttps://www.blogger.com/profile/03398822652849338607noreply@blogger.comtag:blogger.com,1999:blog-2492055523235356445.post-22369030803932317422014-05-01T10:27:19.544-07:002014-05-01T10:27:19.544-07:00Thanks Brad. This is interesting. I have more la...Thanks Brad. This is interesting. I have more layers of doubts that leave me agnostic about several things in the described scenario. You said, "I'd take Searle at his word that he doesn't understand Chinese. . . " But this is not what we are doing. 
We are imagining a far-flung, hypothetical scenario with a number of hidden and doubtful assumptions, and then we are asking, "How would Searle respond to the question 'Can you understand Chinese?' if he were in this situation?" Searle's normal answers to a question today, in a real situation, like "Do you understand how oil futures work on the stock market?" are often unreliable, I have argued. I maintain that answers to the question, "If you were in this elaborate science fiction scenario at some point in the future, and if you were able to memorize volumes and volumes of computer programming, and if you were able to mimic a machine's behavior, and if X and if Y, and if Z, then would you have a strong intuition that you understand Chinese?" are much more unreliable. In effect, we would be "taking him at his word" that his intuitions today about this elaborate situation are reliable indicators of the truth then. I don't think we should do that. If nothing else, we should take note of abundant research now that shows that alleged experts in the stock market, real estate, and in other areas have very high degrees of confidence about their abilities to make predictions and give reliable forecasts, but in fact they do worse than random chance. Their confidence doesn't correlate with their accuracy. <br />I also think that the description of what would be involved in devising a strong AI system that could successfully pass the Turing Test is grossly oversimplified and conceals a number of mistaken presumptions about how such a system would be constructed/programmed that help to fuel the intuitions that seem to corroborate Searle's conclusion. That is, I think the strong intuition we have to agree with Searle is more of an artifact of how the situation has been described than a realistic indicator of what would actually be involved. 
Parallel-processing, back-propagating connectionist networks, which are remarkably good at mimicking the behavior of neurons, are radically different from the sort of old-school, serial-processing code from AI research that Searle was reacting to decades ago. Once we have a more sophisticated, up-to-date picture of how such a system would be built, the intuitions that fed the Searle argument are considerably diminished, or invalidated altogether. But that's another big topic. Matt McCormickhttps://www.blogger.com/profile/17071078570021986664noreply@blogger.com
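The back-propagating networks Matt describes can be made concrete with a few lines of code. The sketch below is purely illustrative and not from the thread: the toy XOR task, the 2-2-1 network size, the learning rate, and all names are my own choices. The point it demonstrates is the one under discussion: the network is never given an explicit rule for XOR; it is only told how wrong each guess was, and the back-propagation step adjusts its weights from that feedback.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: no single linear rule maps inputs to outputs, so the network
# must form its own internal (hidden-layer) representation.
DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

# 2 inputs -> 2 hidden units -> 1 output; each unit has 2 weights + a bias.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

before = total_error()
LR = 0.5
for _ in range(10000):
    for x, t in DATA:
        h, o = forward(x)
        # Feedback: the output's error signal, pushed back through the net.
        d_o = (o - t) * o * (1 - o)
        for i in range(2):
            d_h = d_o * w_out[i] * h[i] * (1 - h[i])
            w_hidden[i][0] -= LR * d_h * x[0]
            w_hidden[i][1] -= LR * d_h * x[1]
            w_hidden[i][2] -= LR * d_h
        w_out[0] -= LR * d_o * h[0]
        w_out[1] -= LR * d_o * h[1]
        w_out[2] -= LR * d_o

after = total_error()
print(before, after)  # squared error over the four cases, before vs. after
```

Nothing in this sketch encodes "XOR" symbolically; a rule emerges, if it emerges, from thousands of small weight corrections. That is the sense in which such systems differ from the rule-shuffling computers Searle had in mind, though whether that difference amounts to understanding is exactly what the thread disputes.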