Ethics

Generally speaking, we don't consider it unethical to harm artificial "beings" such as plush toys or robots (or if we do, we consider it property damage or vandalism, not actual violence). At what point, though, would this change? Say a robot were invented that, from the outside, looked and behaved just like a person, even though it was really just a machine with advanced systems and programming. Would it be unethical to harm that robot? Where would the line be between a lifelike robot and, say, a human clone grown in a vat? When does damage to an inanimate object become violence against something capable of suffering?
Accepted:
September 22, 2011

Comments

Allen Stairs
September 23, 2011

It's an interesting question, but I'm going to turn your last sentence into my answer: it becomes violence when whatever we're dealing with is not an inanimate object, but is capable of suffering.

Could a robot fit that description? It could if its wiring, programming, detection systems, and whatever else needs mentioning made it able to suffer; more generally, if the thing were sentient. Just what that would take is controversial and in any case hard to say for sure, though it's a fair guess that, whatever the full story, plush toys won't make the cut. But if *we* are capable of suffering because of the way our physical bits fit together, then at least in principle an artificially made thing could have bits that fit together in the right sort of way. The fact that it would be "programmed" isn't a problem. After all, a good deal of the way our brains work might as well count as programming, by way of our brain's Bauplan and the ways we've bumped up against the world.

So the in-principle answer is easy: a bit too crudely, what matters is whether, one way or another, the thing can feel. The details are the hard part. Philosophers can contribute, but it's a nut that will only be cracked with a lot of work from a lot of disciplines.

--

A footnote: since the details are so hard, we have a second question: when would it be right to act as though the thing is sentient, even if we aren't sure? I doubt that there's any one criterion. If the thing's inner workings were a lot like ours, that might be sufficient. And even if we weren't sure what the inner workings were, if something acted very much like a sentient creature, that might well be reason to err on the side of moral caution. In particular, a highly human-like robot might well get the benefit of the doubt.
