Without disagreeing with anything Alex has said, let me just add one more thing: there are logicians who sympathize with this sort of question, and who would therefore deny that an argument with inconsistent premises is always valid. There are logics, that is to say, that do NOT validate all inferences of the form: A & ~A, therefore B, for arbitrary B. Such logics are called "paraconsistent", and if you'd like to read about them I'd recommend the Stanford Encyclopedia article as a start.
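As a small illustration of the classical principle at issue (often called "explosion"), one can verify by brute force over truth-value assignments that A & ~A materially implies any B whatsoever — the very inference paraconsistent logics reject. Here is a minimal sketch in Python (the function name is mine, purely illustrative):

```python
from itertools import product

def explosion_is_classically_valid():
    # Check that ((A and not A) -> B) is true under every classical
    # assignment of truth values to A and B. The material conditional
    # "P -> Q" is rendered as "(not P) or Q".
    return all(
        (not (a and not a)) or b
        for a, b in product([False, True], repeat=2)
    )

print(explosion_is_classically_valid())  # True: the premise is never satisfiable
```

The inference comes out valid only because the premise A & ~A is true at no valuation at all; paraconsistent logics block exactly this vacuous route to validity.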
Why is C.I. Lewis' strict implication not taken seriously in this day and age?
Clarence Irving Lewis was known for criticizing material implication and for proposing strict implication in its place. Why are he, his criticisms, and his proposed strict implication not taken seriously today? Many contemporary logic, philosophy, and mathematics texts refer to material implication rather than strict implication.
It should also be said that there is nowadays a lot of formal, logical work that is devoted to various forms of implication, like strict implication. Part of this is done within so-called "modal" logic; part of it is done in theories of conditionals generally; some of it concerns non-classical logics like relevant logic.
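For readers who haven't seen the contrast spelled out, the standard modal definition of strict implication is worth recording. A sketch (the symbol choices here are the conventional ones, not drawn from any particular text mentioned above):

```latex
% Material implication (purely truth-functional):
% false only when A is true and B is false.
A \supset B \;\equiv\; \lnot A \lor B

% Lewis's strict implication: the material conditional, necessitated.
% (The fishhook \strictif comes from the txfonts package; \Box is the
% usual necessity operator of modal logic.)
A \strictif B \;\equiv\; \Box(A \supset B)
```

The difference matters because a merely false antecedent makes A ⊃ B true, whereas A ⥽ B requires that there be no possible circumstance in which A holds and B fails.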
I would like to ask a kind of multiple-angled question about something I have noticed a "lack of" while studying logic. Is "the process of elimination" a sound rule of inference? (Perhaps we've all used this "process of elimination" in taking a multiple-choice test.)
I have read two books on logic: one by Irving M. Copi and Carl Cohen, as well as The Logic Book by Merrie Bergmann, James Moor, and Jack Nelson. I have not seen a single logic text, nor a logic website, where "the process of elimination" appears as an inference rule. Why is this not included as a rule? Is it not considered deductive? Does it go by another name? What is the deal? Thank you in advance for considering this question.
It goes by another name, sometimes "argument by cases" or "argument by dilemma" or "the disjunctive syllogism". The basic rule is:

A ∨ B
~A
∴ B

Obviously, this can be extended to any number of disjuncts, e.g.:

A ∨ B ∨ C ∨ D
~A ∧ ~B ∧ ~C
∴ D

So the disjuncts represent the possibilities you have before you, and the negations represent your ruling out all but one of them. There are quantificational versions as well, e.g.:

∀x(Fx → x = a ∨ x = b ∨ x = c)
∃x Fx
~Fa ∧ ~Fb
∴ Fc

(Note that the existential premise is needed here: without it, the premises are compatible with nothing's being F at all.) This would normally be derived from one of the propositional versions.
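The propositional versions of the rule can be checked mechanically: an inference is classically valid just in case every truth-value assignment that makes all the premises true also makes the conclusion true. A small sketch in Python (the helper `valid` is my own illustrative name):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    # Classically valid iff every valuation satisfying all premises
    # also satisfies the conclusion.
    return all(
        conclusion(*v)
        for v in product([False, True], repeat=n_vars)
        if all(p(*v) for p in premises)
    )

# Disjunctive syllogism: A v B, ~A, therefore B
print(valid([lambda a, b: a or b, lambda a, b: not a],
            lambda a, b: b, 2))                          # True

# Extended form: A v B v C v D, ~A & ~B & ~C, therefore D
print(valid([lambda a, b, c, d: a or b or c or d,
             lambda a, b, c, d: not a and not b and not c],
            lambda a, b, c, d: d, 4))                    # True

# Dropping the negated premise breaks validity, as expected:
print(valid([lambda a, b: a or b], lambda a, b: b, 2))   # False
```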
I'm having an argument with my pal.
He argues since logic prescribes (creates a standard) what is a good/bad inference (valid/invalid) it is normative.
On the other hand, I think Logic is like mathematics or physics - there are laws of logic, but they are not normative (they only describe).
Can you help us settle this beef?
Thank you, Miko
I don't know that I can settle anything. The dispute you are having is one philosophers themselves have today. Some people think logic is normative, in that it prescribes rules concerning how one should think, or reason; other people think logic is purely descriptive, and that it simply tells us something about the notion of implication or validity. One reason people often give against the normative interpretation is that the norms logic provides just seem like bad ones. For example, it was once argued that, since logic tells us that A and ~A imply anything you like, logic would be telling us that, if you reach a contradiction, you should infer that the moon is made of cheese; but, of course, what you should actually do is figure out what went wrong and give up one of the contradictory beliefs. The obvious reply, though, is that this is too simple a conception of what the norms logic prescribes are. It assumes, in particular, that if A implies B, then it is a norm that, if one thinks A, one...
How can we ever talk about what would be?
If we assume a statement A that is not actually true, then anything would follow, since a conditional with a false hypothesis is always true. But some things (such as "P and not-P") can't be true.
This seems to show that a statement that is not true would never be true to begin with. Thus, we can't talk about what would be, only what is.
For example, I'm not driving to the store. But if I were, I'd also be swimming. Of course, though, I can't drive to the store and swim at the same time. This seems to show that, so long as I'm not driving to the store, we can't ever discuss the situation in which I am driving to the store, since that situation implies a contradiction.
Logicians have long distinguished between "indicative" and "subjunctive" conditionals. The terminology reflects a difference, in English, in the grammatical "mood" of the antecedent and consequent. So we have:

(1) If Kennedy was not assassinated, he is living in Colombia.
(2) If Kennedy were not assassinated, he would be living in Colombia.

The view to which you refer, that a conditional with a false antecedent is always true, has certainly been held, but only about indicative conditionals. So (1), on this view, is true if Kennedy was, as we all suppose, assassinated. But it is an entirely different claim that (2) is true simply because Kennedy was assassinated, and I know of no logician who has ever held that view. This is largely because some subjunctive conditionals, such as (2), are precisely intended to report on what would have happened had things been other than we know (or at least presume) they are. Since, as you say, it would be pointless to utter such conditionals, which are known as...
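The material-conditional reading that the questioner has in mind can be made concrete: on that reading, "if p then q" is false only when p is true and q is false, so a false antecedent does make the conditional true, whatever the consequent. A minimal sketch in Python:

```python
def material_conditional(p, q):
    # The material conditional "if p then q" is false only in the
    # case where p is true and q is false.
    return (not p) or q

# A false antecedent makes the conditional true, whatever the consequent:
print(material_conditional(False, True))   # True
print(material_conditional(False, False))  # True
# But the conditional is not trivially true when the antecedent is true:
print(material_conditional(True, False))   # False
```

This is precisely why logicians do not apply the material-conditional analysis to subjunctive conditionals: it would make every counterfactual with a false antecedent vacuously true.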
I read that Gödel's incompleteness theorems don't affect Peano Arithmetic without the multiplication sign. This confuses me, since multiplication can be defined through addition. So even if "PA without multiplication" doesn't have the multiplication sign in itself, it provides everything that's needed for defining it. So what's the difference? That is, why is "PA without multiplication" (which contains everything needed for defining multiplication) different from PA (which already has multiplication defined)?
From "Gödel without tears":
"The formalized interpreted language L contains the language of basic arithmetic if L has at least the standard first-order logical apparatus (including identity), has a term '0' which denotes zero and function symbols for the successor, addition and multiplication functions defined over numbers - either built-in as primitives or introduced by definition - and has a predicate whose extension is the natural numbers."
Is there any difference between having those...
It's true that multiplication can be defined in terms of addition, but the crucial question is: What logical resources are needed for that definition? The usual definition would be in terms of repeated addition, which means that the definition is (primitive) recursive. And now the point is that the theory we're discussing, which is sometimes known as "Presburger arithmetic", doesn't have the resources needed for that sort of definition. So, in fact, this theory does not provide everything that's needed for defining multiplication, only some of it. Gödel does use primitive recursion to define various functions in the course of his proof. Indeed, as the proof is often presented, the central lemma is that every primitive recursive function is "representable" in PA (or whatever theory we're discussing). The details are not essential here, except that the construction crucially depends upon the presence of both addition and multiplication. (Yes: If you drop addition, then the first...
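The "repeated addition" definition in question is the standard primitive recursive one. A minimal sketch in Python, transcribing the usual recursion equations (the point being that Presburger arithmetic, lacking a device for this kind of recursion, cannot express it):

```python
def add(m, n):
    # Primitive recursion on n:  m + 0 = m;  m + succ(n) = succ(m + n)
    return m if n == 0 else add(m, n - 1) + 1

def mul(m, n):
    # Primitive recursion on n:  m * 0 = 0;  m * succ(n) = (m * n) + m
    # Multiplication as repeated addition.
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(mul(3, 4))  # 12
```

Each clause of `mul` appeals to the previous value `mul(m, n - 1)`, and it is exactly this self-reference that a first-order theory needs extra resources (in PA's case, both addition and multiplication, via Gödel's beta-function trick) to simulate.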
Are there any philosophers that deny that a logically derived conclusion from a series of true propositions is also true?
So far as I know, there is no one who holds quite this view. The reason is very simple. We say that an inference is logically valid just in case, whenever the premises are true, so must the conclusion be true. So if one starts with some true premises and "logically derives" some conclusion, then the conclusion has to be true, simply by definition. This assumes, of course, that our logical derivation is correct: that we haven't made mistakes, that the inferences on which we're depending really are logically valid, and so forth. Now, all of that said, philosophers can and do disagree about what inferences really are logically valid. So you might have thought that the inference from "If I go to the movie, then, if I go to the movie, then I'll have a good time" to "If I go to the movie, then I'll have a good time" is logically valid. But there are philosophers who deny that it is. In this particular case, they think, there are additional hidden premises that can be used to make it valid. But all by...
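The particular inference mentioned, from "A → (A → B)" to "A → B" (a form of what logicians call contraction, rejected in some relevance and substructural logics), is classically valid, and this can be checked by brute force over valuations. A small sketch in Python:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def contraction_is_classically_valid():
    # Premise: A -> (A -> B); conclusion: A -> B.
    # Valid iff the conclusion holds at every valuation satisfying the premise.
    return all(
        implies(a, b)
        for a, b in product([False, True], repeat=2)
        if implies(a, implies(a, b))
    )

print(contraction_is_classically_valid())  # True
```

The philosophers who reject the inference are not disputing this truth-table fact; they are disputing whether the material conditional captures the conditional of ordinary reasoning.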
Does Hegel really reject the Law of Non-Contradiction or is that just something analytic philosophers like to say because they dislike him so much?
I don't know anything about Hegel, but I have several friends who reject the Law of Non-contradiction, and they're all perfectly respectable analytic philosophers, with lots of friends who are also analytic philosophers. So I doubt that the claim that Hegel rejects the Law of Non-contradiction, in so far as it is made by analytic philosophers, is one they make because they don't like Hegel. Most of them don't know any more about Hegel than I do, I'll wager. Personally, I'm a big fan of the Law of Non-contradiction, and I think there are good reasons not to reject it. But doing so, as I've indicated, isn't completely nuts. If you want to know about this approach to logic, read the article on dialetheism at the Stanford Encyclopedia.
A logically fallacious argument, as far as I understand should always be invalid - in every possible world.
But take a kid's argument: "This is true, because my father said so." On the one hand, it seems obviously invalid. Such an attitude is never smart (of course, I don't mean a case in which the father is known to be an expert in something, and is therefore a valid authority, but a kid's childish appeal).
However, there is a possible world in which the father of the kid is omniscient and always telling the truth. It seems a logical possibility. But, if it is a logical possibility, then one cannot argue the argument is _logically_ invalid.
A logically valid argument is one that has the property that, if its premises are true, then its conclusion must also be true. It's a nice question how exactly one wants to spell that out, but if we play along with the talk about "possible worlds", then we can say: A logically valid argument is one that has the property that, if, in any given world, its premises are true, then, in that world, its conclusion must be true. On that understanding, the kid's argument is logically invalid. True, there are some worlds in which everything the kid's father says is true. But that is not enough. For the argument to be valid, this has to be so in all worlds, and it obviously isn't. I think the confusion here may be caused by the phrase "always...invalid, in every possible world". Validity is preservation of truth in every world. But what happens in every world isn't really affected by which world you are in, so talk of what's valid in every world doesn't really make sense. What's valid in one world...
I was reading a text claiming that people who believe that God is contingent may be uncomfortable with the implications of contingency. The author cited the Barcan formula. Could you please explain what this formula means and why it's controversial? I'm not great at logic. Thanks!
Wikipedia has a decent entry on the Barcan formula. It is generally held to imply that nothing exists contingently, and that in turn is generally thought insane. I would seem to be a good example of something that exists only contingently. But there are some people who think the Barcan formula can be defended, and it would be nice if it could because it makes certain aspects of modal logic much easier than they otherwise are. That said, I am finding it hard to imagine why the Barcan formula and its consequences would be relevant here. If you believe that God exists contingently, then you think the Barcan formula is false. Since not many people accept it, that isn't much of a loss.