
Mind

What would a robot have to be able to do, or what would it have to be, for us to consider it a sentient being as opposed to a non-sentient automaton? Please note I am using the term "robot" here in a broad sense, including such obviously sentient (fictional) constructs as C-3PO of Star Wars fame. I don't consider "robot" and "sentient being" to be mutually exclusive terms. I'm interested in what fundamentally distinguishes sentient beings from automatons that merely mimic sentience.
Accepted:
January 3, 2012

Comments

Andrew Pessin
January 13, 2012 (changed January 13, 2012) Permalink

This is a great question, and one with a very long history. There's a key ambiguity in it, though, that should be clarified at the start: 'what would it have to be for us to consider it sentient?' might be read metaphysically or epistemologically. To read it metaphysically is to ask what, in fact, is sufficient for the robot to be sentient; to read it epistemologically is to ask what evidence would be sufficient for us, or any third party, to judge that the robot is sentient. The difference is important because it might be that there is some essential feature to sentience, yet not one that would ever allow us to judge with any confidence or reliability that some creature other than ourselves possesses it.

That said, a good starting point for you would be Descartes's Discourse on Method, where he argues (in brief) that genuine linguistic competence and general rationality are marks of the 'mental', or of 'sentience' broadly construed; he holds that no purely mechanical/physical account could ever explain why a creature demonstrates those properties, and while his account is dated, there's no question that 'language' and 'reason' remain very challenging things for researchers in Artificial Intelligence to instantiate in 'robots', even today. Then, after Descartes, skip a few centuries and read John Searle's famous and controversial paper, "Minds, Brains, and Programs", originally published in the journal Behavioral and Brain Sciences in 1980, which set off a decades-long debate over whether any computer or computer program could ever actually instantiate mental states (as opposed to merely mimicking them). If you read that paper and then google 'responses to Searle's Minds Brains Programs' (or, more generally, 'responses to Searle's Chinese Room Thought Experiment'), you will find plenty to chew over as you contemplate your excellent question!

hope that's a useful start --

ap


Richard Heck
January 14, 2012 (changed January 14, 2012) Permalink

The other classic paper on this issue is Alan Turing's "Computing Machinery and Intelligence", from 1950, which articulates what has come to be known as the "Turing Test". Turing's idea was to set up an experiment. A modern version might use some kind of internet chat program. You are talking with two other "people". One really is a person. The other is a computer. You can talk to them for as long as you like, about whatever you like. Then if you can't tell the difference, Turing says, the computer is intelligent. Obviously, this is, at first blush, what Andrew calls an "epistemological" approach to the problem, but Turing doesn't see it just that way.
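To make the setup concrete, here is a minimal sketch of the imitation game as Turing describes it, written in Python. The judge chats through a single text interface with two hidden interlocutors, one human and one machine, and must guess which is which. The machine_reply function is a hypothetical stand-in for whatever chat program is under test; it is not anything from Turing's paper.

    import random

    def human_reply(prompt):
        # Stand-in: in a real trial, a hidden human types the reply.
        return input("(hidden human, reply to the judge) > ")

    def machine_reply(prompt):
        # Stand-in for the program under test; its conversational
        # quality is exactly what the experiment probes.
        return "That's an interesting question. What do you think?"

    def imitation_game(num_exchanges=3):
        # Randomly assign human and machine to channels A and B,
        # so the judge cannot rely on position.
        channels = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:
            channels = {"A": machine_reply, "B": human_reply}

        for _ in range(num_exchanges):
            prompt = input("Judge, ask both a question: > ")
            for name, respond in channels.items():
                print(name + ": " + respond(prompt))

        guess = input("Which channel is the machine, A or B? > ").strip().upper()
        if channels.get(guess) is machine_reply:
            print("Correct -- the machine was unmasked.")
        else:
            print("Wrong -- on this run, the machine passed.")

    imitation_game()

The design choice matters: everything the judge sees passes through the same neutral text channel, so only conversational behavior, not appearance or voice, can settle the guess.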

Let me mention, by the way, that 2012 is also the "Alan Turing Year", celebrating the 100th anniversary of his birth. Turing had a very interesting, and tragic, life. Not only was he one of the founders of modern computer science, but he also put his genius to work for the British military during World War II and helped crack the German codes. The tragic part lies in Turing's being prosecuted for homosexuality in 1952 and then being forced to take female hormones as "treatment" instead of being sent to prison. He committed suicide in 1954, at the age of 41.


Gabriel Segal
April 26, 2012 (changed April 26, 2012) Permalink

Somewhat in line with Searle's arguments in "Minds, Brains and Programs", I would say that the key is original intentionality. Intentionality means something like 'aboutness' or 'representation', in the way that the sentence 'Hesperus is a planet' is about Venus, or represents Venus ('Hesperus' being a name for Venus). In some sense the rings on a tree represent its age: one ring per year. In some sense the written wordforms, the mere physical shapes, 'Hesperus is a planet' represent Venus. But our minds seem to represent things in a much deeper and more fundamental way. The tree rings merely correlate with the tree's age in years. The mere wordforms represent only because we take them to do so. The intentionality of the wordforms is derived from us, whereas the intentionality of our thought that Hesperus is a planet is not derived from anything else: it is original intentionality. I would suggest, as a crude first move, that sentience is intentionality.

Searle's thought was that no matter how sophisticated a computer might be, if it was made out of silicon, or was a man running around very, very fast, shifting large numbers of bits of paper around (following a program written on a blackboard), it would not be doing anything like genuine thinking or cognition. Suppose the computer was one for making a Chinese meal. It would be about Chinese food only in the derived sense. We could see it as telling us how to cook a meal because we have ways of correlating its activity with things we want to know about Chinese food. But it would not in any deeper or more real sense be about, or represent, Chinese food. Its intentionality would be derived, not original.

Searle's view was that only a brain, or something suitably like a brain, could have original intentionality. While I do not myself agree with Searle that we can be sure that the silicon computer, or the very fast man with his bits of paper, does not have original intentionality, I do agree that we cannot be sure that they do have it. I don't think we know in virtue of what some physical systems have original intentionality and some do not. But therein lies the key to sentience, when we find it.
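As a toy illustration of the derived/original contrast, here is a minimal Python sketch of the rule-following at the heart of Searle's Chinese Room; the rulebook entries are made-up stand-ins, not anything from Searle's paper. The program matches input symbols purely by shape and copies out the paired reply, understanding neither:

    # A toy "Chinese Room": replies are produced by shape-matching
    # against a rulebook, with no step that involves knowing what
    # any symbol means. The entries are hypothetical, for illustration.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "你会做什么菜？": "我会做宫保鸡丁。",  # "What can you cook?" -> "Kung Pao chicken."
    }

    def room(symbols: str) -> str:
        # Pure symbol manipulation: look up the input's shape and
        # return whatever string the rulebook pairs with it.
        return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

    print(room("你好吗？"))  # fluent output from an operator who reads no Chinese

To an outside observer the exchange can look competent, but any aboutness in the replies is inherited from whoever wrote the rulebook; the lookup itself has only the derived intentionality described above.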

Source URL: https://askphilosophers.org/question/4470