I would really like to know what logic is. The Stanford Encyclopedia of Philosophy has TOO MANY articles on logic for someone like me. Let me list most of them: action logic, algebraic propositional logic, classical logic, combinatory logic, combining logic, connexive logic, deontic logic, dependence logic, dialogical logic, dynamic epistemic logic, epistemic logic, free logic, fuzzy logic, hybrid logic, independence friendly logic, inductive logic, infinitary logic, informal logic, intensional logic, intuitionistic logic, justification logic, linear logic, logic of belief revision, logic of conditionals, logical consequence, logical pluralism, logical truth, many-valued logic, modal logic, non-monotonic logic, normative status of logic, paraconsistent logic, propositional dynamic logic, provability logic, relevance logic, second-order and higher-order logic, substructural logic, temporal logic. I have started reading some of these articles, but I still haven't found an answer to my basic question. In...

At the risk of a bit of self-promotion, readers might find my introductory article on logic for the Encyclopedia of Artificial Intelligence to be helpful. You can read it online at http://www.cse.buffalo.edu/~rapaport/Papers/logic.pdf.

Do philosophers use computers to find logical proofs? Or are there good reasons the task of programming a computer to do so is difficult (perhaps because of the complexity of the proofs required, or perhaps because you need a human for some sort of creative step)? Just from my experience of undergrad logic, it seemed to me that there was a lot of repetition in what I was doing, and that it was a task I could learn and get better at -- i.e., it wasn't down to pure creativity, but there were learnable, repeatable methods of searching that perhaps could be codified, made systematic.

The short answer to your first question is "not usually". The short answer to your second question is: it is difficult because of the complexity of the proofs. Verifying a proof is, indeed, "codifiable" ("computable" is the technical term) and relatively easy to program (with an emphasis on "relatively"!). Creating proofs is rather more difficult but can also be done, especially if the formula to be proved is already known to be provable. Finding new proofs of unproved propositions has also been done, but is considerably more difficult and is the focus of much research in what is called "automated theorem proving". One of the first AI programs, if not the first, was the Logic Theorist, developed by Nobel Prize winner Herbert Simon, Allen Newell, and Cliff Shaw in 1955. So this is an area that has indeed been looked at. A rule of inference known as "resolution" is used in automated theorem proving and lies at the foundation of the Prolog programming language ("Prolog" = "PROgramming in LOGic"). When...
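To make the resolution rule concrete, here is a toy propositional prover in Python -- a minimal sketch of refutation by resolution, nothing like the engineering inside a real prover. The clause encoding (positive integers for atoms, negative integers for their negations) is just an illustrative choice.

    # Clauses are sets of literals; a literal and its negation resolve away.
    # To prove a goal, add its negation and search for the empty clause.

    from itertools import combinations

    def resolve(c1, c2):
        """Return all resolvents of two clauses."""
        resolvents = []
        for lit in c1:
            if -lit in c2:
                resolvents.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
        return resolvents

    def refute(clauses):
        """Saturate under resolution; True if the empty clause appears."""
        clauses = set(clauses)
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for r in resolve(c1, c2):
                    if not r:            # empty clause: contradiction found
                        return True
                    new.add(r)
            if new <= clauses:           # nothing new: no refutation
                return False
            clauses |= new

    # 1 = P, 2 = Q. Knowledge base: P, and P -> Q (i.e., {-P, Q}).
    # Goal: Q, so we add its negation {-Q} and look for a refutation.
    kb = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]
    print(refute(kb))                    # True: Q follows from P and P -> Q

Proving a goal by adding its negation and hunting for the empty clause is the standard refutation strategy on which Prolog's inference is also built.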

On April 10, 2014, in response to a question, Stephen Maitzen wrote: "I can't see how there could be any law more fundamental than the law of non-contradiction (LNC)." I thought that there were entire logical systems developed in which the law of non-contradiction was assumed not to be valid, and it also seems like "real life" suggests that the law of non-contradiction does not necessarily apply to physical systems. Perhaps I am not understanding the law correctly? Is it that at most one of these statements is true: either "P is true" or "P is not true"? Or is it that at most one of these statements is true: either "P is true" or "~P is true"? In physics, if you take filters that polarize light, and place two at right angles to each other, no light gets through. Yet if you take a third filter at a 45 degree angle to the first two, and insert it between the two existing filters, then some light gets through. Based on this experiment, it seems like the law of non-contradiction cannot be true in...
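For concreteness, here are the numbers behind the filter example, assuming the standard optics rule (Malus's law: an ideal polarizer passes a cos-squared fraction of the intensity -- an assumption imported from physics, not from the exchange itself):

    import math

    def through(intensity, light_angle, filter_angle):
        """Intensity passed by an ideal polarizer, by Malus's law."""
        return intensity * math.cos(math.radians(filter_angle - light_angle)) ** 2

    I0 = 1.0    # polarized light emerging from the first filter, axis at 0 deg

    # Two crossed filters: 0 deg then 90 deg -- nothing gets through.
    print(through(I0, 0, 90))            # 0.0 (up to float rounding)

    # Insert a 45-degree filter between them: some light now gets through.
    mid = through(I0, 0, 45)             # 0.5
    print(through(mid, 45, 90))          # 0.25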

There are real-life situations in which contradictions can appear. Consider a deductive AI knowledge base that can use classical logic to infer new information from stored information (think of IBM's Watson, for example). Suppose that a user tells the system that, say, the Yankees won the World Series in 1998. (It doesn't matter whether this is true or false.) Suppose that another user tells it that the Yankees lost that year. Now the system "believes" a contradiction. So, by classical logic, it could infer anything whatsoever. This is not a good situation for it to be in. One way out is to replace its classical-logic inference engine with a relevance-logic inference engine that can handle contradictions. For example, the SNePS Belief Revision system will detect contradictions and ask the user to remove one of the contradictory propositions. (It can also try to do this itself, if it has supplementary information about the relative trustworthiness of the sources of the...
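Here is a toy sketch of that detect-and-retract behavior in Python -- emphatically not the actual SNePS system, just an illustration of a knowledge base that flags a direct contradiction on assertion and uses source trustworthiness to decide what to drop. The names and the trust scheme are invented for the example.

    class KB:
        def __init__(self, trust):
            self.trust = trust        # source -> trust level (higher wins)
            self.beliefs = {}         # proposition -> source that asserted it

        def tell(self, prop, source):
            neg = prop[4:] if prop.startswith("not ") else "not " + prop
            if neg in self.beliefs:
                rival = self.beliefs[neg]
                # Contradiction detected: keep the more trusted source's claim
                # instead of letting classical explosion infer anything at all.
                if self.trust[source] > self.trust[rival]:
                    del self.beliefs[neg]
                else:
                    return f"rejected '{prop}': contradicts {rival}'s report"
            self.beliefs[prop] = source
            return f"accepted '{prop}'"

    kb = KB(trust={"user1": 1, "user2": 2})
    print(kb.tell("Yankees won the 1998 World Series", "user1"))
    print(kb.tell("not Yankees won the 1998 World Series", "user2"))
    print(kb.beliefs)    # only the more trusted source's claim survives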

Is it possible for two tautologies to not be logically equivalent?

Stephen Maitzen raises some interesting philosophical issues, but, of course, his response is not the "textbook" answer to the question (but then, isn't that what philosophy is all about: questioning "textbook" answers? :-) The "textbook" answer would go something like this: By definition, a tautology is a "molecular" sentence (or proposition---textbooks differ on this) that, when evaluated by truth tables, comes out true no matter what truth values are assigned to its "atomic" constituents. So, for example, "P or not-P" is a tautology, because, if P is true, then not-P is false, and their disjunction is true; and if P is false, then not-P is true, and their disjunction is still true. Furthermore, by definition, two sentences (or propositions) are logically equivalent if and only if they have the same truth values (no matter what truth values their atomic constituents, if any, have). So, because tautologies always have the same truth value (namely, true), they are always logically...
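The "textbook" definitions are mechanical enough to run. A minimal Python sketch, with sentences represented as Boolean functions of their atomic constituents:

    from itertools import product

    def is_tautology(f, n):
        """f takes n Booleans; a tautology is true under every assignment."""
        return all(f(*vals) for vals in product([True, False], repeat=n))

    def equivalent(f, g, n):
        """Logically equivalent: same truth value under every assignment."""
        return all(f(*vals) == g(*vals)
                   for vals in product([True, False], repeat=n))

    def excluded_middle(p):          # P or not-P
        return p or not p

    def no_contradiction(p):         # not (P and not-P)
        return not (p and not p)

    print(is_tautology(excluded_middle, 1))                  # True
    print(is_tautology(no_contradiction, 1))                 # True
    print(equivalent(excluded_middle, no_contradiction, 1))  # True: any two
    # tautologies agree everywhere (both always true), hence are equivalent.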

In paradoxes such as the Epimenides 'liar' example, is it not sufficient to say that all such sentences are inherently contradictory and therefore without meaning? Like Chomsky's 'the green river sleeps furiously', it's a sentence, to be sure, but that's all it is. Thanks in advance :)

Chomsky's sentence was actually: "Colorless green ideas sleep furiously". Several people have argued that, embedded in the right kind of context, it can be taken as meaningful. For some examples, see a handout from one of my courses here.

Is it true that anything can be concluded from a contradiction? Can you explain? It seems like it's a tautology if taken figuratively, because we can indeed conclude anything if we suspend the rules of reasoning, but there is nothing especially interesting in that fact, in my humble opinion.

It's not just that disjunctive syllogism breaks down, but that the conclusion Q is, in general, irrelevant to the premise, which only talks about P (and Not-P). So, in Bertrand Russell's famous version, given a contradiction about, say, arithmetic ("2+2=4 & 2+2=5"), you can use the derivation given by Maitzen to prove that Russell (a famous atheist) is the Pope. For interesting (and amusing) arguments in favor of the importance of relevance, see the early chapters of Anderson, Alan Ross, & Belnap, Nuel D., Jr. (eds.) (1975), Entailment: The Logic of Relevance and Necessity, Vol. I (Princeton, NJ: Princeton University Press). Relevance logics, a form of paraconsistent logic, have found important applications in artificial intelligence, where it is desirable, in devising a computational model of a mind, to have it use a system of logic that does not lead to irrelevancies. For discussion on that topic, see Shapiro, Stuart C. & Wand, Mitchell (1976), 'The Relevance of Relevance', ...
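The semantic side of "anything follows from a contradiction" is easy to check mechanically. A small Python sketch, with the standard syntactic derivation (presumably the one Maitzen gave) reconstructed in the comments:

    # Q is entailed by P & not-P because no assignment makes the premise
    # true, so no assignment makes the premise true and Q false.
    # The standard syntactic route: from P & not-P infer P; from P infer
    # P-or-Q; from not-P and P-or-Q infer Q by disjunctive syllogism.

    from itertools import product

    def entails(premise, conclusion):
        return all(conclusion(p, q)
                   for p, q in product([True, False], repeat=2)
                   if premise(p, q))

    def contradiction(p, q):      # the premise mentions only P
        return p and not p

    def anything(p, q):           # the conclusion is about Q alone
        return q

    print(entails(contradiction, anything))   # True: explosion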

If the sentence "q because p" is true, must the sentence "If p then q" also be true? For example, "the streets are wet because it is raining," and the sentence "if it is raining, then the streets are wet." Are there any counter-examples where "q because p" could be true while "If p then q" could be false?

Suppose that "q because p" is true. I would say that it follows that both q and p have to be true. But, in that case, "if p then q" is also true (assuming that the English "if...then..." expression is interpreted as the material conditional). However, "if q then p" is also true! So, there doesn't seem to be much of an interesting connection between the causal sentence and the conditional sentence.

If the sentence "If p then q" is true, must the sentence "q because p" also be true? For example, "if it is raining, then the streets are wet" and the sentence "The streets are wet because it is raining." Are there any counter-examples where "If p then q" could be true while "q because p" could be false?

The answer to your first question is: No. Let's take your example: Suppose that it is true that it is raining. And suppose that it is true that the streets are wet. Then, by the truth table for the material conditional (which is the default interpretation of the English "if…then…" locution), the sentence "If it is raining, then the streets are wet" is true, because both antecedent and consequent are true. But it might have been the case that the reason that the streets are wet is that someone was cleaning the street with water before the rain began, so that it is false that the streets are wet because it is raining. And there's your counterexample. The one possible piece of wiggle room would be for someone to claim that the material conditional is not the correct interpretation of "if…then…" in this case.
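Here is that truth table spelled out in a few lines of Python, with the relevant row marked:

    # The material conditional "if A then B" is false only in the row
    # where A is true and B is false.

    from itertools import product

    def implies(a, b):
        return (not a) or b

    for a, b in product([True, False], repeat=2):
        print(a, b, implies(a, b))

    # Output:
    #   True  True  True    <- the rain/wet-streets case: both true, so
    #   True  False False      "if p then q" is true regardless of *why*
    #   False True  True       the streets are wet -- which is why the
    #   False False True       causal claim can still be false.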

What is the truth maker for logic? In other words, why should I take logical truths (e.g., material implication) as true?

A few points need clarification before I can begin to answer your question. First, logic is not concerned with truth in the way that, say, the sciences are. Logic is concerned with relationships among sentences that have truth value, not with the actual truth values of the (atomic) sentences. The only apparent exception to this might be those sentences that "must" be true (tautologies) and those that "must" be false (contradictions). But tautologies and contradictions are not atomic sentences; they are "molecular" sentences, and what makes them tautologous or contradictory are the relationships among their atomic constituents. So, for instance, "(p & ¬p)" is a contradiction because—no matter what the actual truth value of p—the truth value of "(p & ¬p)" must be false (because of the truth tables for conjunction (&) and negation (¬)). Logic isn't concerned with p's actual truth value. Second, material implication (→) is not a "logical truth" nor is it even a sentence. It's a...
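That "(p & ¬p)" comes out false no matter what can itself be checked mechanically; a two-line sketch in Python:

    # Whatever truth value p takes, the molecular sentence (p & not-p) is
    # false, by the tables for conjunction and negation alone.

    for p in (True, False):
        print(p, p and not p)    # prints False in both rows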

Are first principles or the axioms of logic (such as identity, non-contradiction) provable? If not, then isn't it just an intuitive assumption that they are true? Is it possible, for example, to prove that a 4-sided triangle or a married bachelor cannot exist? Or must we stop at the point where we say "No, it is a contradiction" and end there with only the assumption that contradictions are the "end point" of our needing to support their non-existence or impossibility?

To prove a proposition is to derive it syntactically (that is, by "symbol manipulation" that is independent of the proposition's meaning). A "good" (or syntactically valid) derivation is one that begins with "first principles" (axioms) and derives other propositions from them (and from other validly derived propositions) by rules of inference. Ideally, the rules of inference should be "truth preserving": If you start with true axioms, then all of the propositions derived from them by the rules should also be true. So, can you prove the axioms? If so, how? The uninteresting answer is, yes, you can prove them (in a technical, but trivial, sense) just by stating them, because they don't need to be derived by any rules from anything "more basic". So, how do you know that they are true? Well, truth and proof are two different things. Proof has to do with syntax, or valid derivation. Truth has to do with semantics, or meaning. Ideally, truth and proof should match up: A formal system ...
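To illustrate the syntax/semantics split, here is a toy proof checker in Python. The "formal system" (two made-up axioms and modus ponens) is invented purely for illustration; the point is that the check never consults what the sentences mean.

    # A derivation is valid if every line is an axiom or follows from
    # earlier lines by a rule -- here just modus ponens over sentences
    # written as nested tuples.

    AXIOMS = {
        ("P",),
        ("if", ("P",), ("Q",)),
    }

    def follows_by_mp(line, earlier):
        """line follows by modus ponens if some earlier 'if A then B'
        has B == line and A also appears earlier."""
        return any(e[0] == "if" and e[2] == line and e[1] in earlier
                   for e in earlier)

    def check(derivation):
        for i, line in enumerate(derivation):
            earlier = derivation[:i]
            if line not in AXIOMS and not follows_by_mp(line, earlier):
                return False
        return True

    # Derive Q from the two axioms by one application of modus ponens.
    proof = [("P",), ("if", ("P",), ("Q",)), ("Q",)]
    print(check(proof))   # True: syntactically valid, whatever P and Q mean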
