What would a robot have to be able to do, or what would it have to be, for us to consider it a sentient being as opposed to a non-sentient automaton?
Please note I am using the term "robot" here in a broad sense, including such obviously sentient (fictional) constructs as C-3PO of Star Wars fame. I don't consider "robot" and "sentient being" to be mutually exclusive terms. I'm interested in what fundamentally distinguishes sentient beings from automatons that merely mimic sentience.
The other classic paper on this issue is Alan Turing's "Computing Machinery and Intelligence", from 1950, which articulates what has come to be known as the "Turing Test". Turing's idea was to set up an experiment. A modern version might use some kind of internet chat program. You are talking with two other "people". One really is a person; the other is a computer. You can talk to them for as long as you like, about whatever you like. If you can't tell the difference, Turing says, the computer is intelligent. At first blush, this is what Andrew calls an "epistemological" approach to the problem, but Turing doesn't see it just that way. Let me mention, by the way, that 2012 is also the "Alan Turing Year", celebrating the 100th anniversary of his birth. Turing had a very interesting, and tragic, life. Not only was he one of the founders of modern computer science, he put his genius to work for the British military during World War II and helped crack the German codes ....
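The experiment described above can be sketched as a tiny protocol. This is only an illustrative toy, not anything from Turing's paper: all the function names and the stand-in "respondents" here are hypothetical, and the judge is deliberately clueless.

```python
import random

def imitation_game(interrogator, respondent_a, respondent_b, questions):
    """Toy sketch of the imitation game: the interrogator reads both
    transcripts and must guess which unseen respondent is the machine."""
    transcript_a = [respondent_a(q) for q in questions]
    transcript_b = [respondent_b(q) for q in questions]
    # The interrogator returns "a" or "b": its guess at the machine.
    return interrogator(questions, transcript_a, transcript_b)

# Hypothetical stand-ins: a chatty "human" and a canned-reply "machine".
human = lambda q: f"Hmm, let me think about '{q}' for a moment..."
machine = lambda q: "I do not understand the question."

# A judge who can't tell the difference just guesses at random --
# which is exactly the outcome Turing says would count as intelligence.
confused_judge = lambda qs, ta, tb: random.choice(["a", "b"])

guess = imitation_game(confused_judge, human, machine,
                       ["Do you dream?", "What is a sonnet?"])
```

The point of the sketch is that the test is purely behavioral: the interrogator sees only the transcripts, never the mechanism producing them.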