
Mind

Why are people so skeptical about the notion that a sufficiently advanced computer program could replicate human intelligence (meaning free will insofar as humans have it; motivation and creativity; comparable problem-solving and communicative capacities; etc.)? If humans are intelligent in the way we are because of the way our brains are built, then a computer could be constructed that replicates the structure of our brains (incorporating fuzzy logic, neural networks, chemical analogs, etc.). Worst comes to absolute worst, a sufficiently powerful molecular simulator could run a full simulation of a human brain or human body, down to each individual atom. So there doesn't seem to be anything inherent in the physicality of humans that makes it impossible to build machines with our intelligence, since we can replicate physical structures in machines easily enough.

If, however, humans are intelligent for reasons that do not have anything to do with the physical structure of our brains or bodies - if there is some immaterial reason for consciousness, free will or other aspects of our intelligence - then we're essentially talking about souls. And if souls don't just supervene on physical phenomena (which is the entire nature of this fork of the problem - if they did supervene, we'd be back at the first point), then why shouldn't machines, too, be able to have souls? Maybe they already do.

The only way to escape this and continue to assert that machines could never possess human intelligence is to say that there is a god, or a group of gods, who decide what gets to have souls and what doesn't, and machines aren't on the list. But outside of theistic circles, this argument can't be expected to carry any weight for as long as people are skeptical about theism in general. So what leads so many people to believe that machines could never replicate a human intelligence?
Accepted:
May 18, 2011

Comments

Eddy Nahmias
May 20, 2011 (changed May 20, 2011) Permalink

You have some philosophy questions in here and some psychology questions. The philosophical questions are about (1) whether a machine could ever replicate all human behavior (i.e., pass a "complete Turing Test"), and (2) whether such complete replication of behavior would entail that the machine actually had the mental states that accompany such behavior in humans (i.e., whether a machine's (or an alien's!) passing such a complete Turing Test means that it is conscious, self-aware, intelligent, free, etc.). There's a ton to be said here, but my own view is that the answers you suggest are the right ones--namely, that there is no in-principle reason that a machine (such as an incredibly complex computer in an incredibly complex robot) could not replicate all human behavior, and that if it did, we would have just as good reason to believe that the machine had a mind (is conscious, intelligent, etc.) as we do to believe other humans have minds. I think there may be severe practical limitations to building such machines, but I think that dualists and John Searle are mistaken to think that such functional replication would not involve real duplication of the mental properties we have.

So, that brings us to the psychological questions: why do lots of people have so much trouble agreeing with the positive answers you and I are inclined to give to the two questions above? I think you touch on one of the reasons: people's religious beliefs lead them to think that God must give you a soul to give you a conscious mind, free will, etc. Turing himself wondered why an all-powerful God would be limited in his ability to give souls to non-humans or machines. Another reason is that the machines we actually interact with do not yet come close to replicating human behavior. So, it's easy to think that mechanisms simply can't do what we do. Indeed, it's hard for us to imagine that our mechanistic brains could explain consciousness, free will, etc., in part because we have no theory to explain it and in part because we have no models for how mental properties can be composed of material properties.

However, having said all that, I actually think our psychology is such that we are happy to ascribe mental states to metal things. Witness C3PO and R2D2, not to mention all the robots from sci-fi films that look and act like humans. I think that most people can easily imagine machines that behave just like humans, because they do imagine it when they watch these movies (read these stories, etc.). And given our 'theory of mind' system, which works to automatically ascribe mental states to other people, animals, etc. that act in certain ways, I think we would have a very hard time not ascribing conscious beliefs, desires, intentions, and intelligence to a robot that looked human and acted human. We might try to hold onto a theoretical conviction that the robot isn't really conscious or intelligent, but that conviction would be betrayed by our actual experiences and interactions with the robot. Next time you watch Star Wars, A.I., Blade Runner, I, Robot, etc. (or even 2001 or Moon, where the computers just talk and don't act), try as hard as you can to experience the machines as completely dark on the inside, as completely lacking any thoughts or emotions or intelligence. It's not easy to do. So, my answer to the psychological question is that people only think they believe that machines could never have minds because they have a philosophical or religious theory that tells them to think that, but they won't really think that once they experience such machines in action.


Allen Stairs
May 30, 2011 (changed May 30, 2011) Permalink

My colleague and I disagree somewhat here, though perhaps on everything essential to your question, we agree.

We all agree that in principle the right kind of "machine" could be every bit as conscious, free, etc. as you and I. And Prof. Nahmias may well be right when he says that if a robot of the C3PO sort acted enough like us, we'd have a very hard time not thinking of it as conscious. I even agree with my co-panelist that people's religious beliefs and the relatively crude character of our actual gadgets may be part of the reason why many people don't think a machine could be conscious.

So where's the residual disagreement? It's on a point that may not be essential, given the way you pose your question. Prof. Nahmias thinks that replicating the functional character of the mind would give us reason enough to think the resulting thing was conscious. I'm not inclined to agree. But that has nothing to do with belief in souls (I don't believe in them and don't even think I have any serious idea what they're supposed to be), nor with the fact that the computers we have are primitive compared to full-fledged people. Interestingly, Prof. Nahmias himself actually identifies -- and agrees with -- the sticking point for folks like me. As he puts it, it's hard for us to imagine that our mechanistic brains could explain consciousness "in part because we have no theory to explain it and in part because we have no models for how mental properties can be composed of material properties."

Now I don't take this to show that matter appropriately arranged can't be conscious. In fact, I believe that we are just such matter. That is, I agree with folk who think that somehow, I know not how, the right physical goings on make for consciousness. But I don't think a purely functional story will do. And it's not just because I don't know how it would work, but because it seems clear to me that a functional story alone doesn't have the resources.

All this is to say that I take what's often called the "explanatory gap" very seriously. I stay in the materialist camp because there's enough we don't know about matter that I'm cheerfully willing to believe that if we knew more, we might have an explanation for consciousness. As a fall-back, I'm quite willing to go along with Colin McGinn's "Mysterianism": it's matter doing its thing that makes us conscious, but we aren't wired to understand how. But it seems clear to me that not only do we not understand how a purely functional story could fill the gap; we understand enough to know that it couldn't.

On this point I'm cheerfully willing to agree to disagree with Prof. Nahmias; I hope he's willing to do likewise. My point isn't to convince you that he's mistaken, but rather to note that for at least some claims about how matter and mind are related, there are reasons for doubt of a different sort than the ones Prof. Nahmias highlights, though reasons that his own further remarks point to.

Source URL: https://askphilosophers.org/question/4061
© 2005-2025 AskPhilosophers.org