
I have a question about "solved" games, and the significance of games to artificial intelligence. I take it games provide one way to assess artificial intelligence: if a computer is able to win at a certain game, such as chess, this provides evidence that the computer is intelligent. Suppose that in the future scientists manage to solve chess, and write an algorithm to play chess according to this solution. By hypothesis, then, a computer running this algorithm wins every game whenever possible. Would we conclude on this basis that the computer is intelligent? I have an intuition that intelligence cannot be reduced to any such algorithm, however complex. But that seems quite strange in a way, because it suggests that imperfect play might somehow demonstrate greater intelligence or creativity than perfect play. [If the notion of "solving" chess is problematic, another approach is to consider a computer which plays by exhaustively computing every possible sequence of moves. This is unfeasible with...

This is a very good question. It is reminiscent of the debate over the so-called "Turing Test", in particular, of an objection to the Turing Test made by Ned Block: his "Blockhead". See the SEP article on the Turing Test for more on this. In the case of chess, it is generally believed that chess is solvable in principle. There are only finitely many possible moves at any stage, etc. So, in principle, a computer could check through all the possibilities and determine the optimum move at each stage. Practically, this is impossible at present, as there are too many moves. But if chess had been solved, and if a computer were simply programmed to make the best move at each stage, then it seems quite clear that no "intelligence" would be involved. Of course, this does not by itself show that "intelligence cannot be reduced to any...algorithm", and the question whether it could be is hotly disputed. There are some famous (or infamous) arguments due to Lucas and Penrose that attempt to establish...
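The "check through all the possibilities" idea can be made concrete with the minimax algorithm run over a complete game tree. Chess itself is far too large for this, but the same logic literally solves small games. A minimal sketch, using a toy Nim variant (the game, function name, and memoization choice are illustrative assumptions, not part of the discussion above):

```python
# Exhaustively "solving" a toy game: players alternately take 1-3
# stones, and whoever takes the last stone wins. Minimax over the
# full game tree, memoized so each position is analyzed once.
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones: int) -> bool:
    """True if the player to move can force a win from `stones` stones."""
    if stones == 0:
        return False  # no stones left: the player to move has already lost
    # The player to move wins iff some legal move leaves the
    # opponent in a losing position.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# "Perfect play" then reduces to a lookup: positions where
# stones % 4 == 0 are losses for the player to move, all others wins.
```

A program playing from this table makes the optimum move at every stage, yet it is just exhaustive bookkeeping, which is exactly the sense in which no "intelligence" seems to be involved.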

Is it wrong to fantasize about sex with children? If a pedophile never acts on their fantasies are they still guilty of having evil thoughts, assuming that their abstinence comes out of a genuine desire not to do harm?

So far as I can see, there's nothing wrong with fantasizing about sex with children. There's nothing wrong with fantasizing about anything you like. If that seems crazy, then it's probably because you are thinking that someone who fantasizes about something must actually wish to do that thing. But that is just not true. As Nancy Friday makes very clear in My Secret Garden, her classic and groundbreaking study of female sexual fantasy, fantasy is not "suppressed wish fulfillment". The point runs throughout the book, but maybe the best statement is on pp. 27-8, though see also the poignant story that opens the book (pp. 5-7). I'd post an excerpt, but the language maybe isn't appropriate for this forum! As Friday's studies reveal, people fantasize about all kinds of things. Some women fantasize about being raped. It's a very common fantasy, in fact. That does not mean these women actually want to be raped, on any level. As Friday remarks, "The message...

If we assume that both computers and the human mind are merely physical, does it follow that a sufficiently advanced computer could do anything that a human brain could do?

No, because the mere physicality of the brain does not imply that the brain is any kind of computer. Maybe the brain is capable of various sorts of quantum computations that would allow it to perform tasks that no computer, even in principle, can perform. Who knows? Indeed, some people have argued that we can prove that the human mind can do things no computer can do, and these arguments do not imply that the mind is in any way non-physical. I think those arguments are no good myself, but they make this point anyway.

I hope this makes sense... I've always been curious about attempts to understand the way our minds work. To me, it seems paradoxical and in some ways even hopeless. I suspect that in order for the mind to understand or learn something new, the mind itself (or at least the way it works) needs to be more complex than what it is processing. In other words, the "size" of the new information cannot exceed the "capacity" of the mind itself, or the mind cannot store it. An example of this would be the way computers work: let's say I have a PC with an old operating system (Windows 2000) and I wish to run a software CD designed for a more advanced operating system (Windows 8). My old computer will most likely not recognize any of the information on that new CD, either because my old computer requires more free space (capacity of mind) or because the information stored on that CD requires a different kind of technology to decrypt (complexity of idea). Thus, you can use a computer to fully process programs (according to its...

I don't work in this sort of area myself, but this kind of view has been held. The position is known as mysterianism, and its main proponent is Colin McGinn. Considerations in the same ballpark also fuel the (in)famous arguments against mechanism due to John Lucas. What certainly does seem clear is that this kind of possibility can't be ruled out a priori. Surely there are some things human minds simply could not ever understand. That's true of all other creatures. Cats, for example, clearly do not have minds complex enough to understand calculus, let alone the nature of their own minds. We all have cognitive limitations. Perhaps we are in a similar position with respect to our minds. But it is not obvious either that our minds are limited in this particular way. The "self-reflective" aspect of understanding our own minds does not, by itself, show that we couldn't possibly do it. Your references to complexity and the like are suggestive, but there are many ways to measure...

What would a robot have to be able to do, or what would it have to be, for us to consider it a sentient being as opposed to a non-sentient automaton? Please note I am using the term "robot" here in a broad sense, including obviously sentient (fictional) constructs such as C-3PO of Star Wars fame. I don't consider "robot" and "sentient being" to be mutually exclusive terms. I'm interested in what fundamentally distinguishes sentient beings from automatons that merely mimic sentience.

The other classic paper on this issue is Alan Turing's "Computing Machinery and Intelligence", from 1950, which articulates what has come to be known as the "Turing Test". Turing's idea was to set up an experiment. A modern version might use some kind of internet chat program. You are talking with two other "people". One really is a person. The other is a computer. You can talk to them for as long as you like, about whatever you like. Then if you can't tell the difference, Turing says, the computer is intelligent. Obviously, this is, at first blush, what Andrew calls an "epistemological" approach to the problem, but Turing doesn't see it just that way. Let me mention, by the way, that 2012 is also the "Alan Turing Year", celebrating the 100th anniversary of his birth. Turing had a very interesting, and tragic, life. Not only was he one of the founders of modern computer science, he put his genius to work for the British military during World War II and helped crack the German codes...

Has philosophy adequately dealt with the mind-body problem? I am looking for a serious answer from a person who is genuinely passionate about philosophy and not mere deferrals of the question through the cliché stances so abundantly available amongst hobbyist-philosophers. Not to worry, I am not out to justify some sort of theological stance; I am merely curious whether professional philosophers are still concerned by this question or its derivatives. I would be very grateful for a response.

I'm not sure what's meant by "adequately dealt with", but if it means something like, "Come up with an answer that satisfies a fairly large group of people", then no, I don't think so. But to the other question, whether philosophers today still care about the mind-body problem, the answer is undoubtedly that they do. The problem isn't that no-one has any good ideas what to say about mind and body; it's rather that too many people have too many good ideas, and the problem is fantastically hard. So hard that some philosophers, such as Colin McGinn, have argued that human beings are cognitively incapable of solving it (just as, say, dogs are cognitively incapable of even fairly basic mathematics). I don't say McGinn is right, just that one shouldn't assume the contrary.

Can we imagine a being who genuinely believes a bald-faced, explicit contradiction (such as that "murder is right, and murder is not right")? Or is there something in the very idea of belief which makes this, not only contingently unlikely, but necessarily impossible?

I know several people who believe such things, or at least say they do. One group thinks that there are true contradictions that involve very special cases. The usual example is the so-called liar sentence, "This very sentence is not true". There is a simple argument that the liar sentence is both true and not true, and some people believe just that. Other people, though, think there are contradictions involving much less special cases. An example would be what are called "borderline cases" of vague predicates, like "bald". People often want to say that there are some people who aren't bald and aren't not bald either. But the so-called De Morgan equivalences entail that this is equivalent to saying that the person is both bald and not-bald (or, strictly, both not-bald and not-not-bald). People who hold such views are known as "dialetheists". See this article for more.
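The De Morgan step mentioned here can be written out explicitly; a sketch in classical notation, with $B$ abbreviating "the person is bald":

```latex
% The borderline claim: the person is neither bald nor not bald.
\neg(B \lor \neg B)
% By De Morgan, this is equivalent to:
\neg B \land \neg\neg B
% which already has the contradictory form P \land \neg P, with P = \neg B
% ("both not-bald and not-not-bald"). With double negation elimination,
% \neg\neg B \equiv B, it becomes the bald-faced contradiction:
B \land \neg B
```

Note that the last step uses double negation elimination, which some of the non-classical logics favored by dialetheists restrict; only the De Morgan move is needed for the "not-bald and not-not-bald" form.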

Is there any objective, scientific way to prove that we all see colours the same? I know it's one thing for two people to point at an object and agree on its colour, even the particular shade, but there's no way that I can tell whether or not the next person in line sees everything in shades of grey, or in negative. We can even study how light interacts with objects and enters our eyes, without truly knowing if one person would see everything the same if he suddenly were able to see through another's eyes. So, is there any proof that we all do see colours the same? Maybe even proof or evidence to the contrary? If that's so, I must say that you're all missing something great from where I can see.

This is a much discussed question, which often appears in the guise of the "inverted spectrum hypothesis": One might wonder whether some other person sees what you see as red the way you see green, etc. It turns out it can't be quite that simple, but one might nonetheless wonder whether we do all see colors the same way. In fact, Ned Block has argued that there is some empirical evidence that we don't all see colors the same way. (See this paper.) It goes without saying that this is very controversial.

Abigail and Brittany Hensel, born 1990, Midwest USA. A very rare dicephalus pair: they have separate heads and necks, but share one torso and a pair of legs. Each has her own heart and stomach, and controls the limbs and feels sensation exclusively on her own side. They share three lungs and, below the waist, a single set of organs. Physically they move as one, in perfect co-ordination. Mentally they are independent, with different preferences and abilities. Their parents are opposed to separation, which would be highly dangerous. Even if successful, the girls would be left severely disabled, and unable to enjoy the walking, running, swimming and bike riding which, together, they can do easily. I am a Cartesian Dualist - I think! Does this situation above not solve the Mind/Body, Mind/Brain problem?

I think I'm confused. The two girls have two brains---one each. So I don't see any threat here to mind--brain identity. There is something philosophically interesting about the fact that the girls, together, can ride a bike, etc, using their shared torso, etc, and I'd be interested to hear what people who work on the body would have to say about them. One would really need to know a lot more about them---about what their ability to control "their" legs is like, etc. But I don't see any threat here either to dualism---though dualism does have its own share of problems.