
Consciousness

There are some strong arguments that if a computer appears to possess intelligence similar to a human's, we must assume it too has self-awareness. Additionally, one could make a strong case that lesser animals have self-awareness, because they have the same type of brain as humans (just in a less sophisticated form). My question is this: if we assume that (a) computers of seemingly human intelligence are self-aware, and (b) animals with lesser brains are self-aware, must we logically conclude that computers of lesser "intelligence" are also self-aware? In other words, are all computers self-aware? Is my toaster self-aware?
Accepted:
March 22, 2008

Comments

Gabriel Segal
April 12, 2008 (changed April 12, 2008) Permalink

If a computer appears to possess intelligence, then we need to consider why it appears so. One reason might be that it is intelligent. Another might be that it has been constructed to appear intelligent and is a good fake. There are in fact a lot of programs that seem to be like that: good fakes, in particular ELIZA, designed by Joseph Weizenbaum in the 1960s, and the programs it inspired. These are basically tin-pot little boxes of tricks that are very effective at giving answers that appear to be intelligent.
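
To give a sense of what such a "box of tricks" amounts to, here is a minimal, purely illustrative sketch in the spirit of ELIZA. The keyword rules and canned replies below are invented for this example and are not Weizenbaum's actual script; the point is only that a few pattern-matching rules are enough to produce superficially intelligent-looking conversation without anything resembling understanding.

```python
import random
import re

# A purely illustrative, ELIZA-style responder (the rules below are invented
# for this example). It only matches keywords and rearranges the user's words;
# nothing here models understanding or self-awareness.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
]

FALLBACKS = ["Please go on.", "I see. Can you say more?", "Why do you say that?"]


def reflect(phrase: str) -> str:
    """Swap first- and second-person words so echoed text reads naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())


def respond(user_input: str) -> str:
    """Return a canned reply by applying the first keyword rule that matches."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    # e.g. "Why do you feel anxious about your toaster?"
    print(respond("I feel anxious about my toaster"))
```

Even this handful of rules can sustain a few plausible-looking exchanges, which is just the worry: apparent intelligence in conversation can be cheap to fake.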

Lesser animals have brains that resemble ours in some ways, but not in others. We don't yet know which aspects of our neurology give us self-awareness. So we are not in a position to tell whether lesser animals are self-aware by comparing their brains to ours.

Do you think that it is programming or neurology that gives rise to self-awareness? If it's the former, then do you think that a very very very simple program would give rise to self-awareness? If it's the latter, then do you think your toaster has neurones?

I'd suggest that an animal with a very simple brain isn't self-aware, and the same goes for a computer with a very simple program.


Jonathan Westphal
November 6, 2008 (changed November 6, 2008) Permalink

Why should the possession of intelligence (whatever we mean by this; say it means winning chess games against the world chess champion, winning bridge games with bad partners against the world bridge champions, issuing correct diagnoses for car repairs, predicting stock market fluctuations, analyzing individual psychology, and so on) require consciousness? We know that when Kasparov played Deep Blue he "sensed" a "weird" and "alien" kind of consciousness, or at least said and thought he did. I have the same experience with my very complicated telephone handset: it is against me, spitefully, deliberately and consciously.

If we allow that playing chess well involves intelligence, then Deep Blue, Deep Fritz and Shredder show the following: intelligence does not require consciousness. If we deny consciousness to these systems, then your question does not arise at all, because (a) is false. (I have used "consciousness", but "self-awareness" implies much more, including, I think, the capacity to critique elements of one's own behaviour.)

Source URL: https://askphilosophers.org/question/2053
© 2005-2025 AskPhilosophers.org