I have a question about "solved" games, and the significance of games to artificial intelligence. I take it games provide one way to assess artificial intelligence: if a computer is able to win at a certain game, such as chess, this provides evidence that the computer is intelligent.
Suppose that in the future scientists manage to solve chess, and write an algorithm to play chess according to this solution. By hypothesis, then, a computer running this algorithm wins every game whenever possible. Would we conclude on this basis that the computer is intelligent? I have an intuition that intelligence cannot be reduced to any such algorithm, however complex. But that seems quite strange in a way, because it suggests that imperfect play might somehow demonstrate greater intelligence or creativity than perfect play.
[If the notion of "solving" chess is problematic, another approach is to consider a computer which plays by exhaustively computing every possible sequence of moves. This is unfeasible with...
Update: An interesting article about one of my computer science colleagues on the subject of cheating in chess and touching on the nature of "intelligence" in chess just appeared in Chess Life magazine; the link is here
The ability to play a game such as chess intelligently is certainly one partial measure of "intelligence", but an agent that could only play very good chess would hardly be considered "intelligent" in any general sense. So I don't think that a winning chess program would be "intelligent". There is also the question of how the program plays chess. Deep Blue and other computer chess programs usually do well by brute-force search through a game tree, not by simulating/mimicking/using the kinds of "intelligent" pattern-recognition behaviors that professional human chess players use. But some might argue that that doesn't matter: If the external behavior is judged to be "intelligent", then perhaps it shouldn't matter how it is internally accomplished. For some illuminating remarks on this issue (not in the context of games, however), take a look at: Dennett, Daniel (2009), "Darwin's 'Strange Inversion of Reasoning' ", Proceedings of the National Academy of Sciences 106, suppl. 1 (16 June): 10061-10065 ...
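To make the idea of brute-force game-tree search concrete, here is a minimal sketch in Python. Chess is far too large to search exhaustively, so the example uses a toy game that *can* be fully solved: simple Nim, where players alternately remove 1 to 3 stones from a pile and whoever takes the last stone wins. The game choice, function names, and move set are illustrative assumptions, not anything from Deep Blue; real chess engines add evaluation functions, pruning, and much more.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def player_to_move_wins(stones: int) -> bool:
    """True iff the player to move can force a win in simple Nim.

    Rules (an assumption for this toy example): remove 1, 2, or 3
    stones per turn; the player who takes the last stone wins.
    """
    # Brute force: try every legal move.  If some move leaves the
    # opponent in a losing position, the current player can win.
    # With no stones left, the previous player took the last stone,
    # so the player to move has already lost (any() over nothing is False).
    return any(
        not player_to_move_wins(stones - take)
        for take in (1, 2, 3)
        if take <= stones
    )

if __name__ == "__main__":
    # Game theory says the player to move loses exactly when the
    # pile size is a multiple of 4.
    for n in range(1, 9):
        print(n, player_to_move_wins(n))
```

This is exactly the sense in which a game can be "solved": the search assigns a definite value (win or loss for the player to move) to every position, and an algorithm that follows those values plays perfectly, with no pattern recognition or insight involved.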
When I look at the room I'm sitting in, I am consciously aware of it as existing outside my body and head. So, for example, I can walk towards the opposite wall and I appear to get closer to it until I reach out and touch it. Now I understand that light is being reflected off a wall, travelling across a room, entering my eyes, and this process creates nervous impulses. (In fact, a physicist would correctly point out that the photons that hit my retina are not even the same as the photons 'reflected' by any object.) I understand that these impulses are processed in various parts of my brain, some unconsciously, but eventually a mental "schema" representing the room is created. I also understand that there are other processes going on in my brain that create my awareness of different types of "self", that continually shift my awareness, and that attempt to always produce a self-consistent view of myself and the world. However, my question is not about these (well, not directly!).
My question is simply how does...
I don't have a good answer for you, but I can point you to a very readable book that discusses this issue among many others: O'Regan, J. Kevin (2011), Why Red Doesn't Sound Like a Bell: Understanding the Feel of Consciousness (Oxford: Oxford University Press). The book's supplementary website has an answer to your question at Other Twisted Issues, and there's an article-length version of the book in: O'Regan, J. Kevin (2012), "How to Build a Robot that Is Conscious and Feels", Minds and Machines 22(2) (Summer): 117-136, as well as a video and a transcript of the author's talk. I don't agree with everything O'Regan says, but he fully understands the issues and has interesting things to say.
If we assume that both computers and the human mind are merely physical, does it follow that a sufficiently advanced computer could do anything that a human brain could do?
As Richard points out, logically, no, it does not follow. Just because two things are both (merely) physical, it does not follow that one of them can do anything that the other can do, not even if both of the (merely) physical things are brains. My pencil is a physical thing, but it can't do everything that my brain can. A cat's brain is physical, but it can't do everything that mine can. (Of course, mine can't do everything a cat's brain can either: I don't usually land on my feet when I jump from a height, and I'm pretty bad at catching mice.) But I think your question really is simply whether a sufficiently advanced computer can do anything that a human brain can. Even so, we need to be a bit more precise. By "anything", I'm guessing that you really mean "anything cognitive"; so, I think your real question is a version of: Can computers think? Philosophers, cognitive scientists, and computer scientists disagree on the answer to that question. I think that one of the best ways to think...
Who are some modern philosophers that argue for either dualism or the idea that mind is a nonphysical substance?
Here's another contemporary philosopher you might want to look into: Galen Strawson: "I take physicalism to be the view that every real, concrete phenomenon in the universe is physical. …[O]ne thing is absolutely clear. You're…not a real physicalist, if you deny the existence of the phenomenon whose existence is more certain than the existence of anything else: experience, 'consciousness', conscious experience, 'phenomenology', experiential 'what-it's-likeness', feeling, sensation, explicit conscious thought as we have it and know it at almost every waking moment. … [E]xperiential phenomena 'just are' physical, so that there is a lot more to neurons than physics and neurophysiology record…." (Strawson, Galen (2006), "Realistic Monism", in A. Freeman (ed.), Consciousness and Its Place in Nature (Exeter: Imprint Academic))
Does the Turing test, the attempt to verify the proposition "Machines can think" through an 'imitation game', come down to a confusion over "like" and "identical with"? I.e., can I say the following: "If it is as if x is thinking, therefore what x is doing is identical to thinking"?
That's one interpretation, but there are many others. My favorite interpretation focuses on this passage in Turing's classic 1950 essay, "Computing Machinery and Intelligence" (Mind 59:433-460): I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. Of course, that century ended around the year 2000, and Turing's predicted "alteration" hasn't yet happened. But that's beside the point. Turing's claim, according to this passage, is that, if computers (better: computational cognitive agents or robots) pass Turing tests, then we will eventually change our beliefs about what it means to think (we will generalize the notion so that it applies to computational cognitive agents and robots as well as humans), and we will change the way we use words like 'think' (in much the same way that we have generalized what it means to fly or to be a computer,...
I recently had a colonoscopy under an anesthetic that caused complete amnesia. An observer could see I was in extreme pain during the procedure yet I have no recollection. How does a philosopher think about the pain I experienced but do not recall?
Daniel Dennett discussed a fictional drug that he called an "amnestic" that allows you to feel pain, but paralyzes you so that you don't exhibit pain behavior, and leaves you with amnesia. Pleasant, no? For the details and his philosophical analysis, read: Dennett, Daniel C. (1978), "Why You Can't Make a Computer that Feels Pain", Synthese 38(3) (July): 415-456; reprinted in his Brainstorms: Philosophical Essays on Mind and Psychology (Montgomery, VT: Bradford Books (now Cambridge, MA: MIT Press), 1978): 190-229.
If we were able to create a computer that functions exactly like a human brain, when does this "artificial" intelligence stop being artificial? I suppose what I'm trying to say is that if this computer could truly learn, and be programmed in such a way as to develop emotions just as humans do, when does it become real? When is it not right to just unplug it and "kill" it?
Many people would, I'm assuming, argue that a computer isn't living, or isn't biological. (As noted in an earlier answer, that criterion isn't particularly decisive; we all weed our gardens.) It comes down to emotion as far as I'm concerned.
I'm finding this question particularly difficult to phrase, and the more I type the more I think that the question is going to come across as all over the place, so I'm going to stop at that and hope for the best! If there is no response I will try again another time.
And a good place to continue (after reading Turing 1950) might be with some of the readings that I have listed on my Philosophy of Computer Science course webpages at: Philosophy of Artificial Intelligence and at Computer Ethics, especially: LaChat, Michael R. (1986), "Artificial Intelligence and Ethics: An Exercise in the Moral Imagination", AI Magazine 7(2): 70-79
Human beings have a certain self awareness that nobody seems to fully comprehend. Is it possible that plants and animals have this same cognition but are simply limited in their ability to communicate with the physical world? It seems scientifically unlikely but science is built on physical evidence, and thoughts are not physical. They’re metaphysical. So, we can’t really comprehend their nature, right? Are there some theologians and philosophers who’ve theorized that plants and animals have thoughts just like people?
I would like to focus on your last question: Is it possible that plants and animals have thoughts just like people? Let's take animals first. We are animals, so at least some animals have thoughts just like people. Our nearest animal relatives--the primates--probably have thoughts very much like ours, though (perhaps with a few very special exceptions) theirs differ from ours in that none of theirs are expressed in language (while some, if not all, of ours probably are). (The few very special exceptions would be those primates who have been taught various kinds of sign languages or artificial languages.) Going down the evolutionary tree, I'd be willing to say that other mammals have thoughts not unlike ours, etc. In fact, I'd be willing to say that any animal that has a suitably rich nervous system might have thoughts not unlike ours (what counts as "suitably rich" is open for debate, of course). In fact, I'll propose that having a nervous system (either biological or artificial) is a necessary...
I have a question about colors. I always wonder whether other people see the same color as I see. For example, we can agree that an apple's color is red, but is it possible that we are referring to different colors as RED?
First, take a look at Question 2384 and its answers, which are closely related to your question. Your question is related to what is called the "inverted spectrum", a philosophical puzzle posed by John Locke, one version of which is this: Is it possible that objects that have the color you describe as "red" are seen by me as if they had the color you describe as "green", even though I also describe them as red, and vice versa? Posing the problem is difficult; e.g., objects arguably don't "have" colors, but reflect light of certain wavelengths, which are perceived by us as certain colors. "Is the color that I perceive as, and call, red the same as the color that you perceive as what I call blue?" is another way of posing the puzzle. Part of the problem is that there doesn't seem to be any way to decide what the answer is (if, indeed, it has an answer). What experiment would decide between these? Perhaps such color-perceptions (more generally, what are called "qualia") are such that a...