Suppose P is true and Q is true. It then follows logically that P --> Q, that Q --> P, and therefore that P <--> Q. Now suppose that P is 'George W. Bush is the 43rd President of the US' and Q is 'Bertrand Russell invented the ramified theory of types'. Both propositions are true, and therefore the truth of both guarantees the truth of the aforementioned propositions. But it seems bizarre to say that Russell's invention of the theory of types entails that Bush is the 43rd president, as well as the other logical consequences. After all, we can conceive of a scenario where Russell invents the ramified theory of types but Bush becomes a plumber (say). If that is a possible scenario, it would seem that the proposition "If Russell invents the ramified theory of types then Bush is the 43rd President of the US" is false, given the definition of 'if then'. But after all, does it make sense to say that a proposition entails another only in the actual world? (That doesn't seem to have as much generality as we...
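A minimal sketch of the reasoning in symbols, reading 'if ... then' as the truth-functional (material) conditional:

\[
P,\; Q \;\vdash\; P \to Q, \qquad P,\; Q \;\vdash\; Q \to P, \qquad \text{and hence} \qquad P,\; Q \;\vdash\; P \leftrightarrow Q,
\]

since a material conditional with a true consequent is automatically true.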

To give a similar but somewhat different answer, one might think the problem with the line of reasoning in the question comes here: "But it seems bizarre to say that Russell's invention of the theory of types entails that Bush is the 43rd president...". We were talking about the statement, "If Russell invented the theory of types, then Bush was the 43rd president", and now we're talking about entailment? Why? What do these have to do with each other? The move from talking about the truth of conditionals to talking about entailment is what lies, in many ways, behind the invention of (formal) modal logic, by Lewis and Langford in the 1920s. One of the central ambitions of early modal logic was to formalize the notion of entailment. It was with reference to this that Quine spoke of modal logic's being "conceived in sin, the sin of confusing use and mention"---of confusing "if p then q" with "`p' entails `q'". Now, that said, it is undoubtedly a serious question whether the English indicative...
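To mark the contrast in symbols (a rough sketch, with $\Box$ read as 'necessarily', in the spirit of Lewis's strict implication):

\[
P \to Q \quad \text{(material: true whenever } Q \text{ is true)} \qquad \text{vs.} \qquad \Box(P \to Q) \quad \text{(strict: } P \to Q \text{ holds in every possible world)}
\]

The first follows from the mere truth of Q; the second, which is the better candidate for 'entails', does not.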

Hi. Take the following syllogism:

John believes that green people should be killed.
Mushmush is a green person, a neighbour of John.
Thus, John believes that Mushmush should be killed.

Formally, the argument seems valid. However, in reality it doesn't work. A person can believe that all people with quality X should be killed, but not think it about a specific person he knows. So is there a logical contradiction here? What happens? Thank you, Sam

With all due respect to Professor Green (hi, Mitch!), even that is not the final word. I think perhaps Professor Nahmias was assuming that John knows perfectly well that Mushmush is a green person, Mushmush being his neighbor and all that, and that John has some minimal degree of logical competence. Still in that case, most people would hold that it does not logically follow that John believes that Mushmush should be killed. There are two quite different reasons for this. One involves the fact that we cannot, even in principle, actually deduce all the logical consequences of everything we believe. It seems extremely plausible, in fact, that there are propositions of the form "All F are G" and "x is an F" that I believe, where I do NOT believe the corresponding proposition of the form "x is G", simply because I have never gotten around to inferring it. Note carefully that the claim is not that I believe that x is NOT G, just that I fail to believe that it is. In this kind of case, though, you...
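Put schematically (a sketch, writing B for 'John believes that'):

\[
B\,\forall x\,(Fx \to Gx), \quad B\,Fa \quad \not\Rightarrow \quad B\,Ga
\]

The closure principle that would license the step fails for real, non-idealized believers, who have not drawn every consequence of what they believe.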

Hello. What exactly is completeness in logic? What makes some system of logic complete? And what is incompleteness?

The notion of completeness for logics links two notions: A notion of what is provable or deducible in some formal system of logic, and a notion of what is valid, which is itself defined in terms of a notion of interpretation. It's probably best to think of the latter as primary. We have some system of logical notation, and we have a way of interpreting it that gives rise to a class of "valid" formulas. What we'd like to have then is a proof-method that will be complete in the sense that, if a given formula is valid, then it will be provable by that method. More generally, we can think not just of the class of valid formulas but of some notion of implication or entailment that relates formulas: So we say that some bunch of formulas A, B, C, ... entail some other formula Z. Then what we want is a proof-method that will be complete in the sense that, if Z really does follow from A, B, C, ..., then there is a way of deriving Z from A, B, C, ..., by the proof-method. You can't always have...
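In symbols, writing $\models$ for validity/entailment under the interpretation and $\vdash$ for provability in the system (a standard way of putting it):

\[
\text{Completeness:}\quad \text{if } \models Z, \text{ then } \vdash Z; \qquad \text{more generally, if } A, B, C, \ldots \models Z, \text{ then } A, B, C, \ldots \vdash Z.
\]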

I have a question about the identity of a certain kind of fallacy, namely:

A = C
B = C
therefore A = B

Confusingly, I have read that the above syllogism is valid; and yet consider this argument I've heard recently:

Obama = Good speaker
Hitler = Good speaker
therefore Obama = Hitler

Clearly the latter is a fallacy. So, I have two questions, really: 1) What is the name of this fallacy? 2) How can it be a fallacy if the first syllogism (A = C, B = C, therefore A = B), whose form it follows, is considered to be valid . . . or am I wrong about it being valid?

And, to add to all the confusion, one can say: Obama is identical to a good speaker; and also: Hitler is identical to a good speaker. But it certainly doesn't follow that Obama is Hitler. The reason, in this case, is that what stands on the right-hand side of the identity here is not a name, but what philosophers and linguists call an "indefinite". Exactly how indefinites work is a matter of some controversy, but one (older) way to resolve this puzzle is to treat "Obama is identical to a good speaker" as meaning: There is a good speaker with whom Obama is identical. Or, in logical symbolism: (∃x)(good-speaker(x) & x = Obama) Now the fallacy should be clear. The crucial point is that the only really well-defined notion of validity in logic is one that applies only to formal, logical representations. To apply the notion of formal validity to arguments in ordinary language, one has to "translate" the ordinary arguments into logical notation, and it is not always clear how this is to be...
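Spelled out in that notation (a sketch):

\[
(\exists x)(\text{good-speaker}(x) \;\&\; x = \text{Obama}), \quad (\exists x)(\text{good-speaker}(x) \;\&\; x = \text{Hitler}) \quad \not\models \quad \text{Obama} = \text{Hitler},
\]

since the two existential claims can be made true by different good speakers. Compare the genuinely valid form with names on both sides: from a = c and b = c, it does follow that a = b.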

Are logical laws such as de Morgan's preserved under modalisation? For example, what are the truth conditions for the following sentences: Peter knows that Mary does not invite Paul and Peter. Peter knows that it is possible that Mary does not invite Paul and Peter.

I'm not sure precisely what is being asked here. The first sentence is true if the following is something Peter knows: Mary does not invite (both) Paul and Peter. Perhaps there is another reading under which it is true if the following are both things Peter knows: Mary does not invite Paul; Mary does not invite Peter. But this isn't likely to be a significant difference, under most accounts of knowledge attributions. Similar things can be said about the second sentence. What I don't understand is what any of this has to do with the de Morgan laws. These say, among other things, that something of the form ~(A & B) is logically equivalent to the related thing of the form ~A v ~B. But neither of these sentences is of that form, unless we're talking about the second mentioned reading of the first one. And, in that case, yes, it certainly is equivalent. What might be at issue is whether, e.g., "X knows: ~(A & B)" has to be equivalent to "X knows: ~A v ~B", which is to ask whether substitution of...
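In symbols, writing K for 'Peter knows that' (a sketch of the issue, not a verdict):

\[
\neg(A \;\&\; B) \;\equiv\; \neg A \vee \neg B \quad \text{(De Morgan, at the level of plain sentential logic)}
\]
\[
K\,\neg(A \;\&\; B) \;\overset{?}{\equiv}\; K\,(\neg A \vee \neg B) \quad \text{(holds only if logically equivalent contents can be substituted inside K)}
\]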

A common discussion-killer is the declaration: "You can't prove a negative!" Immediately the conversation screeches to a halt and people turn to other topics. Is there really nothing more to be said? A: Fairies don't exist. B: You can't prove a negative. A: Okay, fair enough. So how do you like this pizza? Does it have to be this way?

Perhaps part of the problem is the word "prove", which also tends to get used when talking about such things as the existence of God. (No-one can prove that God exists, we're often told.) As our erstwhile leader, Alex George, has often pointed out, however, outside mathematics, one can rarely "prove" anything. So to be told in that sense that no-one can "prove" a negative is unhelpful. One can't "prove" a positive in that sense, either. As Peter said, more or less.

What do derivation systems in a formal logical language tell us about logic? Or about the propositions in the proof? Is their purpose only to show us that a particular proof or argument can be demonstrated using that particular language? In other words, why do we have derivations in formal logic ... what is their grand purpose?

Peter always gets to these before I do. I agree with what he says, but will add a couple of points. First, modern logic emerges in the work of Gottlob Frege, one of whose contributions was the first formal system of logic. Frege is explicit about his motivations. Here's a passage from his paper "On Mr Peano's Conceptual Notation and My Own", from 1897: "I became aware of the need for a conceptual notation [formal language] when I was looking for the fundamental principles upon which the whole of mathematics rests. ...For an investigation such as I have in mind here, it is not sufficient just for us to convince ourselves of the truth of a conclusion, as we are usually content to do in mathematics; on the contrary, we must also be made aware of what it is that justifies our conviction, and upon what primitive laws it is based. For this are required fixed guiding-lines, along which the deductions are to run; and in verbal languages these are not provided. If we try to list all the laws governing the...

I always assumed that there could be no contradictions -- that the principle of non-contradiction was absolute, so to say. Recently, however, I read about dialetheism and paraconsistent logic and realized that some philosophers disagreed. It seems all of logic falls apart if contradictions are permitted. I fail to understand how their position makes any sense (which could admittedly be just a failure on my part). So is it possible someone could better explain their viewpoint? Surely none of them believe that, say, one could simultaneously open and close a book, right?

So far as I know, no "dialetheists" believe that all contradictions are true. But there is a significant disagreement about whether it's just weird cases, like the liar, that give rise to contradictions, or whether there might be contradictions that are in some sense observable. Graham Priest thinks there are; moderates like J.C. Beall think there aren't. The case of simultaneously opening and closing a book leads naturally to issues about vagueness. It's natural to think that it's vague whether a book is closed. Take an obviously closed book and then "open" it a nanometer. Surely a nanometer can't make a difference to whether the book is closed, can it? (If you think it can, try a picometer. Or something even smaller.) But then, lots of nanometers add up to a centimeter, which surely can make a difference. So, a dialetheist might say, if we take a "borderline case", that will be a case where the book is both open and closed. (If you're inclined instead to say that it's a case where it's neither open...
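The sorites structure behind the example, in schematic form (a sketch, with C(n) for 'the book opened by n nanometers is still closed'):

\[
C(0), \qquad \forall n\,\bigl(C(n) \to C(n+1)\bigr) \;\;\vdash\;\; C(10^{7}),
\]

where $10^{7}$ nanometers is a centimeter. Each step looks harmless, yet the conclusion says that a book opened a full centimeter is still closed; borderline cases are where the dialetheist is tempted to say both $C(n)$ and $\neg C(n)$.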

Is this argument valid?

A) The sky is blue.
Therefore,
B) 2+2=4

It may not seem that the premise is relevant to the conclusion. But an argument is supposed to be valid if its premises cannot be true without its conclusion being true. B is a necessary truth (we can imagine a world in which the sky is red, but a world in which 2+2=5 is just incoherent). B is always true, therefore B must be true in cases in which A is true. So this must be a valid argument. There's something horribly wrong with this thinking, but I can't figure it out.

It may not help very much, but the argument you describe is not (usually considered to be) logically valid. It's true that 2+2 could not have been other than 4, but almost no-one nowadays would suppose that it was logic that guaranteed that fact. So we might say that the argument is "mathematically valid", since there is no mathematical possibility of its conclusion being false while its premise is true. One other thing worth saying is that one shouldn't confuse the question what implies what with the question what you should infer from what. It does indeed seem silly to infer that 2+2=4 from the premise that the sky is blue (or pink, or green!). But it only follows that "the sky is blue" doesn't imply "2+2=4" if you assume that correct implication should always entail reasonable inference. But you shouldn't assume that.
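The modal point can be put this way (a sketch):

\[
\text{If } B \text{ is true in every possible world, then no world makes } A \text{ true and } B \text{ false, so } A \models B \text{ on that broad reading, for any } A \text{ whatsoever.}
\]

Whether that broad reading, rather than a strictly logical one, is the right way to understand "valid" is exactly what is at issue.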

Logic textbooks which offer a system of natural deduction containing a so-called "rule of replacement" restrict this rule to logically equivalent formulae. Only these can replace each other wherever they occur. I have often wondered why this is so. It seems to me that having e.g. p <--> q and p & r as lines in a proof (as premisses, say) would allow one to soundly infer q & r directly from them by replacement of p by q in p & r, without requiring that p and q be logically equivalent. In less formal situations, for example, when solving a math problem, I find myself (and others) doing this all the time. I've searched the internet for this, but couldn't find any answer so far. Most grateful in advance for a reply.

There's a need for some care here. In classical logic, one certainly does have the rule: From A <==> B and ...A..., infer ...B..., in the sense that the former things will always imply the latter thing, and also in the sense that, in any complete system, applications of this rule could always be replaced by applications of other rules. So, in such systems, this is what is sometimes called a "derived rule". But we do not always have this rule. Indeed, there are plenty of systems in which one does not have it. A simple example is a modal logic that has the additional operator ◊, meaning "possibly". It certainly does not follow from A <==> B and ◊A that ◊B. It does follow (at least in the usual systems) if A <==> B is not just true but a theorem.
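A small countermodel makes the point (a sketch in Kripke-style semantics, with two worlds w and v, both accessible from w):

\[
\begin{array}{c|cc}
 & A & B\\
\hline
w & \text{F} & \text{F}\\
v & \text{T} & \text{F}
\end{array}
\qquad
\text{At } w:\; A \leftrightarrow B \text{ is true, } \Diamond A \text{ is true (witnessed by } v\text{), but } \Diamond B \text{ is false.}
\]

If A <==> B were a theorem, it would be true at every world, and the substitution inside ◊ would then go through.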
