
Mind

If we were able to create a computer that functions exactly like a human brain, when does this "artificial" intelligence stop being artificial? I suppose what I'm trying to say is that if this computer could truly learn, and be programmed in such a way as to develop emotions just as humans do, when does it become real? When does it become wrong to just unplug it and "kill" it? Many people would, I'm assuming, argue that a computer isn't living, or isn't biological. (As noted in an earlier answer, that's not a particularly valid objection; we all weed our gardens.) It comes down to emotion as far as I'm concerned. I'm finding this question particularly difficult to phrase, and the more I type the more I think it's going to come across as all over the place, so I'm going to stop there and hope for the best! If there is no response I will try again another time.
Accepted:
February 24, 2011

Comments

Saul Traiger
February 25, 2011

The “artificial” in “artificial intelligence” describes the origin of the intelligence: it is the result of artifice rather than nature. Both artifacts and natural things are real. However, until the advent of computers, the term “intelligence” was rarely applied to artifacts, so thinking that artifacts can be intelligent involves conceptual change. You embrace this change and suggest that a computer that functions the way intelligent humans function is indeed intelligent. Many philosophers of mind agree with you. You further suggest that emotion is a necessary component of an intelligent being. This is a bit more controversial. You may have watched Watson, IBM’s computer, beat the world’s best human Jeopardy! players recently. Many would be inclined to say that Watson is intelligent but lacks emotion. Your final, and most provocative, claim is that such artificially intelligent entities have moral status: that under some circumstances it would be wrong to unplug them. (See the termination of HAL 9000 in 2001: A Space Odyssey.) I encourage you to think further about the important distinctions embedded in your question: between intelligent and non-intelligent things, between natural things and artifacts, between intelligence and emotion, and between beings with moral status and those without. A good place to start is Alan Turing’s classic 1950 paper, “Computing Machinery and Intelligence.”


William Rapaport
February 25, 2011

And a good place to continue (after reading Turing 1950) might be with some of the readings that I have listed on my Philosophy of Computer Science course webpages, under Philosophy of Artificial Intelligence and Computer Ethics, especially: LaChat, Michael R. (1986), "Artificial Intelligence and Ethics: An Exercise in the Moral Imagination", AI Magazine 7(2): 70-79.

Source URL: https://askphilosophers.org/question/3865