
Topics: Knowledge, Mind

Are machines able to have knowledge?
Accepted: May 30, 2006

Comments

Peter Lipton
June 3, 2006

For a machine to have knowledge, it looks like it has to be able to have beliefs, since when you know something you also believe it (though not conversely). And for a machine to have beliefs, it has to be able to form representations of how the world might be. So the answer to your question will depend in part on whether machines can form representations. This is a hotly debated question in the philosophy of mind, for the case where the machines in question are computers. The two most famous arguments are due to Alan Turing and to John Searle.

Turing argued that there could be a computer that is able to engage in an extended intelligent conversation (by email, perhaps) so good that it fools people into thinking it is a person, and that such a computer ought to be taken to have representational states. Searle argued that since we know how computers (traditional ones, anyway) actually create their end of the conversation, and since this involves only registering the electronic equivalent of the syntax or grammar or shape of sentences and not what those sentences mean, computers can understand neither what they are hearing nor what they are saying.

If Searle is right, it looks like computers cannot have knowledge; but if Turing is right, then maybe they can. (See the Stanford Encyclopedia of Philosophy articles on the Turing Test and on the Chinese Room.)
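To make Searle's worry concrete, here is a minimal sketch (my own illustration, not anything drawn from either argument) of a program that holds up its end of a conversation by matching the surface shape of the input against canned rules. Everything in it is syntactic pattern; nothing in it stands for what any sentence means.

```python
# A toy, purely syntactic responder: replies are chosen by matching the
# surface form of the input against rules. No state here represents what
# any sentence is about, which is exactly the gap Searle points to.

RULES = [
    # (trigger substring, canned reply): a symbol-to-symbol mapping
    ("how are you", "I am fine, thank you. And you?"),
    ("do you understand", "Of course I understand every word you say."),
    ("?", "Interesting question. Could you say more?"),
]

DEFAULT_REPLY = "I see. Please go on."

def respond(sentence: str) -> str:
    """Pick a reply by surface pattern alone."""
    lowered = sentence.lower()
    for trigger, reply in RULES:
        if trigger in lowered:
            return reply
    return DEFAULT_REPLY

if __name__ == "__main__":
    for line in ["How are you today?", "Do you understand Chinese?",
                 "Tell me about knowledge."]:
        print(">", line)
        print(respond(line))
```

Whether a vastly more elaborate version of this kind of rule-following could ever amount to understanding is just what divides Turing's and Searle's positions.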


Mark Sprevak
June 5, 2006

I agree with Peter's response, and I'd like to pick up on the possibility that the machines in question are not computers.

Although it is not clear what computation is, it seems plausible that not all machines are computers. A claim that such non-computational machines can have knowledge would escape both Turing's and Searle's arguments. One might argue that human beings are such machines: we work in mechanical ways, and we have knowledge, but we are more than mere computers. John Searle holds a mechanistic, non-computational view along these lines.

A potential challenge for such a view is to explain what this broader sense of 'mechanical' means. It must mean something different from 'performs a computation', but one might be reluctant to broaden the notion so far that it applies to all possible systems: that would render it trivially true that machines can understand. It is not obvious how to find an intermediate ground.


Louise Antony
June 27, 2006

Clearly, machines can process information. For the machine to have knowledge, however, this information has to be information for the machine – the machine would have to understand the information it processes. What would that involve?

In the first place, the states or events in the machine that store or process the information (including, for example, databases and the contents of memory registers) would have to be richly integrated with all the other states of the machine, and particularly with the machine’s input and output states, analogously to the way in which our thoughts and memories are integrated with our perceptions and motor commands. This is a functional requirement on machine understanding.
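As a toy illustration of that functional requirement (my own sketch, not Antony's), consider the difference between a datum that merely sits in storage and a state the machine actually uses: one that is updated by input and consulted in producing output.

```python
# Toy sketch of functional integration: the stored reading is driven by
# "perceptual" input and consulted whenever the device acts, so it plays
# a role in the system's behaviour rather than merely sitting in memory.

class Thermostat:
    def __init__(self, target: float):
        self.target = target    # goal state
        self.reading = None     # input-driven internal state

    def sense(self, temperature: float) -> None:
        # Input side: the internal state tracks the world.
        self.reading = temperature

    def act(self) -> str:
        # Output side: behaviour depends on the stored state.
        if self.reading is None:
            return "idle"
        return "heat on" if self.reading < self.target else "heat off"

t = Thermostat(target=20.0)
t.sense(17.5)
print(t.act())  # prints "heat on"
```

A thermostat's integration is of course far too thin to count as understanding; the point of saying 'richly integrated' is that the web of connections would have to be vastly denser, more like the way perception, memory, and action interlock in us.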

The second requirement is that the input states that supply the information be properly related to the states of affairs in the world the information is about. For human beings, the input states are perceptions, and what a visual perception "means" – what it is about – is determined by lawful relations between the physical structure of the object under view and the patterns of firing in the retinal cells receiving the signal. In contrast, in currently existing computers, the inputs, which are strings in an arbitrary symbol system, only have the meaning that we, the users, assign to them.

Turing's infamous test – the one Peter Lipton refers to above – clearly does not test for the relevant condition. It gives both false positives and false negatives: false positives because it could be passed by a radio-operated animatronic doll, and false negatives because it would be failed by any minded creature who happened to be paralyzed and unable to make any behavioral response. The right way to consider the question is to look at the overall behavior of the machine – or, as it might be, the alien life form – and see if the best explanation of that behavior is that its inner workings conform to the conditions given above.

Source URL: https://askphilosophers.org/question/1201
© 2005-2025 AskPhilosophers.org