A recent questioner asked if there are any more dialogue-based--as opposed to strict question-and-answer format--places on the internet to discuss philosophy. The replies took the questioner to be implying a kind of unregulated "philosophical chat room" where anyone can throw out their dubious reasoning and call it philosophy. That may characterize many internet forums, regardless of the subject matter, but there is, I think, a middle ground between this site's ask-the-experts format (which I greatly appreciate, don't get me wrong!) and chats/blogs by people who are totally unqualified to comment meaningfully on philosophical issues. Are there any blogs that you would *recommend* for the level of discourse that, at least sometimes, is displayed there between professional philosophers and, perhaps, thoughtful "lay-people" (i.e., where philosophically disciplined and thoughtful people talk to each other)?

Here are two suggestions.

The first is less of a philosophy blog and more of a metaphilosophy blog, but it often has useful links to other blogs that you might like: "Leiter Reports: A Philosophy Blog / News and views about philosophy, the academic profession, academic freedom, intellectual culture...and a bit of poetry": http://leiterreports.typepad.com/

The second is also not quite a philosophy blog itself but a philosophy metablog, with summaries of, and pointers to, other philosophy blogs: "Philosopher's Carnival". Its location moves around, but each new location is posted on Leiter Reports. The current edition is at: http://ichthus77.blogspot.com/2012/02/philosophers-carnival-138.html

How is "philosophical progress" made, assuming it is made at all? And on a related note, are philosophical theories ever completely abandoned (considered "wrong"), or do they simply adjust to criticism?

The philosopher Benson Mates once characterized philosophy as a field whose problems are unsolvable. This has often been taken to mean that there can be no progress in philosophy as there is in mathematics or science. But I believe that solutions are always parts of theories, and hence that acceptance of a solution requires commitment to a theory. Progress can be had in philosophy in the same way as in mathematics and science, by coming to know what commitments are needed for solutions. In a sense, this means that sometimes philosophy "progresses" backwards, by coming to understand what extra assumptions are needed to solve its problems. (I've written about this in a technical paper--"Unsolvable Problems and Philosophical Progress" (American Philosophical Quarterly, 1982)--and in an essay for a non-technical audience--"Can Philosophy Solve Its Own Problems?" (SUNY News, 1984).) There was also a recent symposium on this topic at Harvard, and some of the talks from that symposium can be Googled online.

Does the Turing test, the attempt to verify the proposition "Machines can think" through an 'imitation game', come down to a confusion over "like" and "identical with"? I.e., can I say the following: "If it is like x is thinking, then what x is doing is identical to thinking"?

That's one interpretation, but there are many others. My favorite interpretation focuses on this passage in Turing's classic 1950 essay, "Computing Machinery and Intelligence" (Mind 59: 433-460): "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." Of course, that century ended around the year 2000, and Turing's predicted "alteration" hasn't yet happened. But that's beside the point. Turing's claim, according to this passage, is that, if computers (better: computational cognitive agents or robots) pass Turing tests, then we will eventually change our beliefs about what it means to think (we will generalize the notion so that it applies to computational cognitive agents and robots as well as humans), and we will change the way we use words like 'think' (in much the same way that we have generalized what it means to fly or to be a computer,...

Do false statements imply contradictions? Consider the truth table for logical implication:

P  Q  P -> Q
T  T  T
T  F  F
F  T  T
F  F  T

Notice that for a false statement P (the last two rows of the truth table), both Q and ~Q follow. No matter what Q is, its truth follows from the false statement P, as the third row shows. We can therefore take Q to be "P is true." From here it follows that a false statement P implies its own truth, as the third row shows. Do false statements really imply their own truth? Do they really imply contradictions? Are false statements also true?

One branch of logic that deals with an alternative to material implication, and that has applications in artificial intelligence, is called "relevance logic". For more information on it, take a look at:

Anderson, Alan Ross, & Belnap, Nuel D., Jr. (1975), Entailment: The Logic of Relevance and Necessity (Princeton, NJ: Princeton University Press) -- especially the introductory chapters, which present arguments as to why relevance logic is "better" than classical logic.

Lepore, Ernest (2000), Meaning and Argument: An Introduction to Logic through Language (Malden, MA: Blackwell), §A3 ("Conditionals"), esp. §A3.1.1 ("Paradox of Implication"), p. 317, and §A3.1.3 ("Paradox of Implication Revisited"), pp. 319-320.

For a literary discussion of what happens when a computer or robot uses classical logic, see:

Asimov, Isaac (1941), "Liar!", Astounding Science Fiction; reprinted in Isaac Asimov, I, Robot (Garden City, NY: Doubleday), Ch. 5, pp. 99–117.

And for applications to AI,...
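As an aside on the truth-table reasoning in the question: the following minimal sketch in Python (illustrative code of my own, not drawn from any of the sources above) enumerates the classical table and shows where the inference slips. A false P does make both "P -> Q" and "P -> ~Q" true, but neither consequent can be detached without P itself being true:

# Classical (material) implication: false only when the antecedent is true
# and the consequent is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Enumerate the full truth table from the question.
for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5}  Q={q!s:5}  P->Q={implies(p, q)!s:5}")

# With a false P, both conditionals come out true...
p = False
assert implies(p, True) and implies(p, False)
# ...but modus ponens cannot detach either consequent, because detaching
# requires P to be true, and P is false. So no contradiction, and no truth
# of P, is actually derived; the conditionals are merely "vacuously" true.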

I recently had a colonoscopy under an anesthetic that caused complete amnesia. An observer could see I was in extreme pain during the procedure yet I have no recollection. How does a philosopher think about the pain I experienced but do not recall?

Daniel Dennett discussed a fictional drug that he called an "amnestic" that allows you to feel pain, but paralyzes you so that you don't exhibit pain behavior, and leaves you with amnesia. Pleasant, no? For the details and his philosophical analysis, read: Dennett, Daniel C. (1978), "Why You Can't Make a Computer that Feels Pain", Synthese 38(3) (July): 415-456; reprinted in his Brainstorms: Philosophical Essays on Mind and Psychology (Montgomery, VT: Bradford Books (now Cambridge, MA: MIT Press), 1978): 190-229.

Hi! I understand how to apply derivation rules, like the rules of inference, etc. My question is: do we have a method of proving the rules themselves? Is there a way to prove that "If P then Q; P; therefore Q"? Or do we accept these rules out of intuition?

Rules of inference are "primitive" (i.e., basic) argument forms; all other arguments are (syntactically) proved using them. So you could either say that the rules of inference are taken as primitive and not (syntactically) provable, or you could say that they are their own (syntactic) proofs. However, the way that they are usually justified is not syntactically, but semantically: For propositional rules of inference, this would mean that they are (semantically) proved by means of truth tables. A rule such as Modus Ponens (your example) is semantically proved (i.e., shown to be semantically valid) by showing that any assignment of truth values to the atomic propositions (P, Q in your example) that makes all of the premises true also makes the conclusion true.
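To make that semantic justification concrete, here is a minimal sketch in Python (an illustration of mine, not part of the original answer) that "proves" Modus Ponens by brute force: it checks that every assignment of truth values to P and Q that makes both premises true also makes the conclusion true:

from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication, as given by its truth table.
    return (not p) or q

# Modus Ponens: premises are "P -> Q" and "P"; the conclusion is "Q".
# The rule is semantically valid iff the conclusion is true under every
# assignment of truth values that makes all of the premises true.
valid = all(
    q  # the conclusion
    for p, q in product((True, False), repeat=2)
    if implies(p, q) and p  # keep only assignments making both premises true
)
print(valid)  # prints: True

The same brute-force check, run on an invalid form such as affirming the consequent ("P -> Q; Q; therefore P"), prints False, because the assignment P=false, Q=true makes both premises true and the conclusion false.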

When I multiply 2 by 2, is it by a form of reasoning that I produce the result, or rather mere memorization? Does the same hold for multiplications of larger numbers, or arithmetic operations generally?

When an elementary-school student is learning how to multiply, the result of multiplying 2 by 2 is probably produced by a form of reasoning (perhaps repeated addition). When you or I do it, it's probably done by rote memory. But when any of us multiplies two 6-digit numbers, it's almost certainly by "reasoning". (Maybe those with "savant syndrome" do it by some kind of memory-like process, or maybe it's just by very fast, unconscious reasoning.) But the "reasoning" we use for multiplying those larger numbers consists of applying the multiplication algorithm, among whose steps are instructions to multiply single-digit numbers together (like 2x2, or 9x8). And those multiplications are probably done by "memory" (what computer scientists call "table look-up"). That's because multiplication is a recursive procedure: We multiply large numbers by applying the multiplication algorithm, which requires us to multiply smaller numbers, eventually "bottoming out" in the base case of table look-up of the...
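A minimal sketch in Python (names and decomposition are my own illustration, not the school algorithm step for step) may make the recursive picture vivid: single-digit products live in a memorized table (the base case), and multi-digit multiplication decomposes into smaller multiplications that eventually bottom out in that table:

# "Rote memory": the single-digit times table, stored as a look-up table.
TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply(x: int, y: int) -> int:
    """Multiply non-negative integers recursively, bottoming out in table look-up."""
    if x < 10 and y < 10:
        return TIMES_TABLE[(x, y)]  # base case: a memorized fact like 2x2 or 9x8
    # Recursive case: split each number into its last digit and the rest,
    # so that x = 10*xh + xl and y = 10*yh + yl; then
    # x*y = 100*xh*yh + 10*xh*yl + 10*xl*yh + xl*yl.
    xh, xl = divmod(x, 10)
    yh, yl = divmod(y, 10)
    return (100 * multiply(xh, yh)
            + 10 * multiply(xh, yl)
            + 10 * multiply(xl, yh)
            + multiply(xl, yl))

print(multiply(2, 2))            # 4: pure "memory" (a single table look-up)
print(multiply(123456, 654321))  # "reasoning": recursion that ends in look-ups
assert multiply(123456, 654321) == 123456 * 654321

The structure is exactly the one described above: the "reasoning" is an algorithm whose innermost steps are memory.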
