Sunday, June 14, 2015

Building self-aware machines

The public mood toward the prospect of artificial intelligence is dark. Increasingly, people fear the results of creating an intelligence whose abilities will far exceed our own and whose goals are incompatible with ours. I think this resistance is a mistake (and futile), and that we should be actively striving toward the construction of artificial intelligence.

When we ask “Can a machine be conscious?” we often miss several important distinctions. With regard to the AI project, we need to distinguish at least between qualitative/phenomenal states, exterior self-modeling, interior self-modeling, information processing, attention, sentience, executive top-down control, self-awareness, and so on. Once we make a number of these distinctions, it becomes clear that we have already created systems with some of these capacities. Others are not far off, and still others present the biggest challenges to the project. Here I will focus on just two, following Drew McDermott: exterior and interior self-modeling.

A cognitive system has a self-model if it has the capacity to represent, acknowledge, or take account of itself as an object in the world with other objects. Exterior self-modeling requires treating the self solely as a physical, spatial-temporal object among other objects. You can easily locate yourself spatially in a room; you have a representation of where you are in relation to your mother’s house, or perhaps to the Eiffel Tower. You can also easily locate yourself temporally. You represent Napoleon as an early 19th-century French Emperor, and you are aware that the segment of time you occupy is later than the segment of time he occupied. Children swinging from one bar to another on the playground are employing an exterior self-model, as is a ground squirrel running back to its burrow.

Exterior self-modeling is relatively easy to build into an artificial system compared to many of the other tasks facing the AI project. Your phone is already advanced enough to locate itself in space in relation to other objects with its GPS system. I built a CNC (Computer Numerically Controlled) cutting machine in my garage that I “zero” out when I start it up. I designate a location in a three-dimensional coordinate system as (0, 0, 0) for the X, Y, and Z axes, and the machine then keeps track of where it is in relation to that point as it cuts. When it’s finished, it returns to (0, 0, 0). The system knows where it is in space, at least in the very small segment of space it is capable of representing (about 36” x 24” x 5”).
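To make the idea concrete, here is a minimal sketch (in Python) of the kind of exterior self-model the CNC example involves: the system designates an origin, tracks its position relative to it, and represents only locations inside its small workspace. The class and method names are my own illustrations, not drawn from any real controller software.

```python
# A minimal sketch of an exterior self-model, loosely based on the CNC example:
# zero an origin, track position relative to it, stay within the modeled space.
# Names are illustrative; no real controller API is assumed.

class ExteriorSelfModel:
    def __init__(self, limits=(36.0, 24.0, 5.0)):
        self.limits = limits             # workspace in inches: X, Y, Z
        self.position = (0.0, 0.0, 0.0)  # current location relative to the origin

    def zero(self):
        """Designate the current location as (0, 0, 0)."""
        self.position = (0.0, 0.0, 0.0)

    def move_by(self, dx, dy, dz):
        """Update the self-model as the tool moves, refusing any move that
        would leave the segment of space the machine can represent."""
        new_pos = tuple(c + d for c, d in zip(self.position, (dx, dy, dz)))
        if any(c < 0 or c > lim for c, lim in zip(new_pos, self.limits)):
            raise ValueError("move would leave the modeled workspace")
        self.position = new_pos

    def return_home(self):
        """Return to (0, 0, 0) when the job is finished."""
        self.position = (0.0, 0.0, 0.0)
```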

Interior self-modeling is the capacity of a system to represent itself as an information-processing, epistemic, representational agent. That is, a system has an interior self-model if it represents the state of its own informational, cognitive capacities. Loosely, it is knowing what you know and knowing what you don’t know. A system with an interior self-model is able to locate the state of its own information about the world within a range of possible states. When you recognize that watching too much Fox News might be contributing to your being negative about President Obama, you are employing an interior self-model. When you resolve not to make a decision about which car to buy until you’ve done some more research, or when you wait until after the debates to decide which candidate to vote for, you are exercising your interior self-model. You have located yourself, as a thinking, believing, judging agent, within a range of possible information states. Making decisions requires information. Making good decisions requires being able to assess how much information you have, how good it is, and how much more of it, or how much better, you need in order to decide within the tolerances of your margins of error.

So to endow an artificial cognitive system with an interior self-model, we must build it to model itself as an information system, much as we would build it to model itself in space and time. Hypothetically, a system can have no information, or it can have all of the relevant information. And the information it has can be poor quality, with a high likelihood of being false, or high quality, with a high likelihood of being true. Those two dimensions function like a spatial-temporal framework, and the system must be able to locate its own information state within that range of possibilities. Then, if we want it to make good decisions, the system must be able to recognize the difference between the state it is in and the minimally acceptable information state it should be in. Then, ideally, we’d build it with the tools to close that gap.
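As a rough illustration, here is what locating an information state within that two-dimensional range might look like. This is only a sketch under assumed names and thresholds (coverage, reliability, and the 0.8/0.9 cutoffs are mine), not a claim about how such a system would actually be built.

```python
# A sketch of an interior self-model along the two dimensions described above:
# how much information the system has (coverage) and how likely it is to be
# true (reliability). Thresholds and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class InformationState:
    coverage: float     # fraction of the relevant information gathered, 0.0-1.0
    reliability: float  # estimated probability the information is true, 0.0-1.0

@dataclass
class InteriorSelfModel:
    state: InformationState
    min_coverage: float = 0.8     # minimally acceptable amount of information
    min_reliability: float = 0.9  # minimally acceptable quality of information

    def fit_to_decide(self) -> bool:
        """Locate the current information state and compare it to the
        minimally acceptable state for the decision at hand."""
        return (self.state.coverage >= self.min_coverage
                and self.state.reliability >= self.min_reliability)

    def gap(self) -> InformationState:
        """The gap the system must close (by gathering or vetting more
        information) before it should decide."""
        return InformationState(
            coverage=max(0.0, self.min_coverage - self.state.coverage),
            reliability=max(0.0, self.min_reliability - self.state.reliability),
        )
```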

Imagine a doctor who is presented with a patient with an unfamiliar set of symptoms. Recognizing that she doesn’t have enough information to diagnose the problem, she does a literature search so that she can responsibly address it. Now imagine an artificial system with reliable decision heuristics that recognizes the adequacy or inadequacy of its information base and then does a medical literature review far more comprehensive, consistent, and discerning than any a human doctor could perform. At the first level, our AI system needs to be able to compile and process information that will produce a decision. But at the second level, our AI system must be able to judge its own fitness for making that decision and rectify the shortcoming in its information state if there is one.
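To show how those two levels might fit together, here is a small sketch in the same spirit. The function names, the minimum-findings threshold, and the stubbed-out literature search are all hypothetical; the point is only the shape of the control flow: assess the information state first, rectify it if necessary, then decide.

```python
# A sketch of the two-level structure: a first level that turns evidence into
# a decision, and a second level that judges the adequacy of the evidence base
# and tries to close any gap (here, a stand-in for the literature review).

def diagnose(findings):
    """First level: compile and process the evidence into a decision.
    Stubbed out; a real system would run its diagnostic model here."""
    return f"diagnosis based on {len(findings)} findings"

def adequate(findings, minimum=5):
    """Second level: judge the system's own fitness to decide. A real system
    would also assess quality, as in the interior self-model sketch above."""
    return len(findings) >= minimum

def literature_review(findings):
    """Second-level remedy: broaden the evidence base. Here it just appends a
    placeholder; a real system would search the medical literature."""
    return findings + ["finding retrieved from the literature"]

def decide_with_self_assessment(findings):
    """Assess the information state before deciding, closing the gap if needed."""
    while not adequate(findings):
        findings = literature_review(findings)
    return diagnose(findings)

# Starting with only two findings, the system recognizes that its information
# state is inadequate, gathers more, and only then decides.
print(decide_with_self_assessment(["fever", "rash"]))
```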

The ability to represent itself as an epistemic agent in this fashion is one of the most important and interesting ways to flesh out the notion of something being “self-aware.” By carefully analyzing other senses of "machine consciousness" we may come to see that there is no single, deeply mysterious and inherently insoluble problem. Rather, there are many different, fascinating questions that can be framed in computational terms and that will yield to computational methods.


Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

McDermott, Drew. “Artificial Intelligence and Consciousness.” In The Cambridge Handbook of Consciousness, edited by Zelazo, Moscovitch, and Thompson, 117–150. 2007.


Matt McCormick
Department of Philosophy
Sacramento State

8 comments:

  1. Matt, thanks for the post. It's thought provoking... and I want to probe your implied sentiment that we shouldn't be afraid of creating beings which are not only like us in the relevant ways (self-conscious, self-aware, etc.), but far better at everything than we are.

    I'm not saying we shouldn't, but I think we need to be doing a lot more reflection on the implications if we do. As a political philosopher also concerned about ethics, I can’t help but wonder what the status of these beings would be as participants in the world we have created and into which they have been created. If they're better than us at doctoring, perhaps they should replace us in that capacity; perhaps they would also serve better as legislators and judges; perhaps ultimately they should also be free to choose what they do and become. I'm not inclined toward speciesism or AI-phobia, but it would not be far-fetched to imagine that in creating such beings, we create our own eventual extinction. The vast majority of us just would not be able to compete if they were our moral and political equals in this world of ours. And perhaps they would come to see, even more clearly than we do, that we do not deserve to compete successfully, since we're already destroying more of value on this planet than is tolerable by any fair assessment.

    Not that I think humanity is all that much worth preserving as a species in the grand scheme of things, but we should at least consider the possibility that in what we create we might not merely be solving some interesting puzzles about consciousness or mind, but creating, in good evolutionary (and Marxist) fashion, the agent of our own (perhaps merited) destruction. Thoughts?

    Replies
    1. Chris, this is exactly the big question. Nick Bostrom makes a good case that there's a lot to worry about here. His short answer is that instead of trying to enumerate all of the "do nots" and hard-programming them into one of these systems, we should be seeking to inculcate the sum of the right human values into a potentially superintelligent AI. How we do that is a huge, complicated project, and it's where philosophers, not computer scientists, are going to make the most useful contribution.

  2. Nice post, Matt. The analysis of interior self-modeling reminds me so much of Locke's definition of personhood: "A person is an intelligent thinking being that can know itself as itself, the same thinking thing, in different times and places."

    Regarding exterior self-modeling, it seems like there is a really important distinction between having a model that allows us to locate ourselves in the physical environment, and the ability to use that model reflectively. For example, it seems like a dog does exterior self-modeling that allows it to be intuitively aware of where other objects are relative to itself, as just about any animal that moves about must. But a dog doesn't seem able to actually represent itself, to itself, as an object in its environment. For example, if you tie a dog to a tree with a long rope it has no ability to avoid wrapping the rope around other nearby objects and still less to rectify the situation. Part of this is that they are stupid, but it also seems to me that it is because they lack the capacity to reflect on the map itself, something we experience as looking at things from a different perspective. I don't know, but I bet there are things that a human rock climber can do that a mountain goat can't, simply because she can think about herself in this way. Also, the basic ability to test plans counterfactually by imagining ourselves doing things rather than just finding out by doing them seems to depend on this ability.

    Where do you place this ability? I would call it metacognition, but that is usually thought of as meaning awareness of your own thought processes. This seems more like metacognitive awareness of our spatio-temporal relations. We don't need to be thinking of ourselves as a thinking thing to be able to do this.

    Also, I note that you use the word 'physical' to describe exterior self-modeling, but I wonder if we really need to say that. Maybe we do, but it seems like this is what a simulated entity would need as well to navigate through its world.

  3. Philosophically, your blog post makes three big points: (1) Don’t be so afraid of A.I.; (2) there is no single, deeply mysterious hard problem of consciousness; (3) solving a consciousness problem is a matter of better computation. I’m OK with the first two, but am less sure about the third. It seems to me that there could be situations where progress in A.I. is made not computationally but rather by, say, modeling some aspect of consciousness chemically. How do you know you can rule out non-computational methods of A.I.?

  4. Brad, while I'm using the parlance of computationalism in a lot of places here, I don't think anything I've said commits me to a strict or comprehensive computationalism. I wouldn't presume to rule out non-computational approaches. But at the end of the day, chemical processes, parallel distributed processing networks, and other seemingly more "organic" phenomena in the brain can be and have been modeled computationally, so I'm receptive to it. The contribution philosophers seem to be able to make here is to help divide and conquer the various aspects, functions, abilities, and capacities of the mind, and we are making fast headway in AI research on modeling and understanding many of them. The Drew McDermott article that I reference provides a good summary of many of these projects, and makes a really interesting case for even being able to generate phenomenal conscious states, qualia, computationally.

  5. I, too, like the divide and conquer approach to the problem of A.I., and I agree with you that researchers are making headway in A.I. research, but I still like John Searle’s point that computation probably isn’t all there is to being intelligent, and that the proper stuff might be required, such as certain chemicals. To illustrate the point, there is a computational model of water, but the model run on a computer won’t quench your thirst; only the right stuff satisfying the model will quench your thirst.

    Replies
    1. Brad, I don't know the details of Searle's argument about chemicals. But it might turn out that, given the way the periodic table works, and given the demands of speed, information processing, and so on, the best way to build such a system is to build it with certain kinds of chemical structures. I had a friend with a PhD in chemistry who worked for Daimler Chrysler on the large electric car battery problem. He said, "There is only so much we can do with the periodic table and building batteries," in response to my questions about building electric cars that would go further, faster, or run longer. So no doubt it will turn out that some physical configurations of some elements and compounds will achieve the functional goals of AI better than others. And human brains achieve some of those goals. But human brains are also horribly kludgey systems that have lots of inefficiencies, redundancies, design flaws, glitches, and limitations. I think the difference between what the blind, stumbling steps of the evolutionary process produced and what deliberate planning and careful design can produce will be huge. I don't think the computational model of water illustrates the point you want to make, however. It's too disanalogous. That's a bit like arguing that the water pump on your Ford Explorer isn't actually a water pump because it won't fit on your Toyota 4 Runner. Different systems, different needs, different problems being solved.

    2. Matt, you said that “human brains are also horribly kludgey systems that have lots of inefficiencies, redundancies, design flaws, glitches, and limitations. I think the difference between what the blind, stumbling steps of the evolutionary process produced and what deliberate planning and careful design can produce will be huge.” I agree with all these points. However, I still have a problem with your complaint in the last four sentences about quenching thirst and disanalogy. Suppose we are building an AI robot and we want to achieve the sub-goal of building into it a system that uses water to quench its thirst. My point is that in order to successfully quench its thirst we designers may need to ensure that the robot is made of the right chemicals and not simply ensure that the robot has some efficient computational sub-routine running on a silicon and steel mini-computer inside the robot. Does that seem reasonable to you?
