Is John Searle's Chinese Room parable a fundamental proof that computers do not have consciousness?

Searle is actually primarily concerned with intentionality, not with consciousness - that is, he takes up the question of whether the computer genuinely understands the meanings of the outputs it produces. By way of the Chinese Room thought experiment, he takes himself to have shown that computers (or at least computers that proceed by way of symbol manipulation) do not have intentionality.

The nub of Searle's provocative argument is this: suppose you are given only symbols, without being told their meaning, plus rules for manipulating those symbols to generate an output of further symbols, where those rules never mention meaning. Then you are never going to learn the meaning of those symbols, however intelligent the output seems to those who do know their meaning. Computers (at least traditional ones) seem to be like that: they are symbol-manipulating machines that never work with meanings, only with 'shapes' (or patterns of electrical impulse). The argument has considerable force, but it raises an obvious question: what could we have that computers don't, which enables us to wring meaning out of the mere sound waves and electromagnetic radiation that our senses detect?
