Where on earth did Philosophers get the idea that "just in case" means "if and only if"[1] instead of "in the event of"? I ask just in case there's a legitimate reason for the apparently willful muddying of language! [1] for example http://www.askphilosophers.org/question/2290

I recall someone sending me a short paper complaining about the linguistic tic of using "just in case" to mean "if and only if" when I first started editing Analysis 20 years ago. So, rightly or wrongly, this has been going on for a while! But note, we can't grammatically substitute "in the event of" for "just in case" in e.g. "I'll buy some tofu just in case some guests are vegan". And the latter doesn't mean the same as "I'll buy some tofu just in the event that some guests are vegan" either. The first, on my lips, means that I'll buy the stuff anyway, so I'm prepared: the second means I won't buy the stuff unless I really have to.

Since one's own reasoning is basically a set of rules of inference operating on a set of axiomatic beliefs, can one reliably prove one's own reasoning to be logically consistent from within one's own reasoning? Might not such reasoning itself be inconsistent? If our own reasoning were inconsistent, might not the logical consistency (validity) of such "proofs" as those of Gödel's Incompleteness Theorems be merely a mirage? How could we ever hope to prove otherwise? How could we ever trust our own "perception" of "implication" or even of "self-contradiction"?

This question raises a number of issues it is worth disentangling. It is far from clear that we should think of our reasoning as "operating on a set of axiomatic beliefs". That makes it sound as if there's a foundational level of beliefs (which are kept fixed as "axioms"), and then our other beliefs are all inferentially derived from those axioms. There are all sorts of problems with this kind of crude foundationalist epistemology, and it is pretty much agreed on all sides that -- to say the least -- we can't just take it for granted. Maybe, to use an image of Quine's, we should instead think of our web of belief not like a pyramid built up from foundations, but like an inter-connected net, with beliefs linked to each other and our more "observational" beliefs towards the edges of the net linked to perceptual experiences, but with no beliefs held immune from revision as "axioms". Sometimes we revise our more theoretical beliefs in the light of newly acquired, more observationally-based, beliefs...

Look at this inference: Premise 1: All desks have the same color. Premise 2: That desk is brown. Conclusion: All desks are brown. Now, I understand that this is a deduction. However, the conclusion is a generalization of one of the premises, and generalizations of premises are what one would expect in induction. Where did I go wrong?

True: In any situation in which both premisses are true, the conclusion has to be true too. So the displayed inference is deductively valid. [There are possible wrinkles here, but let's ignore them.] Also true: Inferring the conclusion from the second proposition alone would be an inductive inference, and a very bad one at that. The first is a fact about the given two-premiss inference; the second is a fact about a different one-premiss inference. So there is no conflict there!
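For the record, here is one way of regimenting the inference to make its deductive validity explicit (the regimentation and the symbols are mine, not the questioner's): write $c(x)$ for the colour of $x$, $d$ for that desk, and $b$ for the colour brown.

$$\begin{array}{ll}\text{Premiss 1:} & \forall x\,\forall y\,(c(x) = c(y))\\ \text{Premiss 2:} & c(d) = b\\ \text{Conclusion:} & \forall x\,(c(x) = b)\end{array}$$

Take an arbitrary $x$: Premiss 1 gives $c(x) = c(d)$, Premiss 2 gives $c(d) = b$, so by the transitivity of identity $c(x) = b$; since $x$ was arbitrary, we can generalize to the conclusion. The derivation uses only instantiation, identity reasoning, and generalization on an arbitrary object -- no inductive step anywhere.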

I have a question about Whitehead and Russell's "Principia Mathematica". Can mathematics be reduced to formal logic?

Let's narrow the question a bit: can arithmetic be reduced to logic? If arithmetic can't be so reduced, then certainly mathematics more generally can't be. What would count as giving a reduction of arithmetic to logic? Well, (1) we would need to give explicit definitions (or perhaps some other kind of bridging principles) relating the concepts of arithmetic to logical concepts. Otherwise we won't get arithmetical concepts into the picture at all. And (2) we would need to show how the theorems of arithmetic can in fact be derived from logical axioms plus those definitions or bridge principles. But what kind of definitions in terms of what sorts of concepts are we allowed at step (1)? And what kinds of logical principle can we draw on at step (2)? Suppose you think that the notion of a set is a logical notion. And suppose that you define zero to be the empty set, one to be the set of all singleton sets, two to be the set of all pairs, and so on. We can then define the...
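To make the definitions just gestured at a little more concrete (the spelling-out below is mine, in the Frege-Russell spirit, and goes beyond what the truncated answer actually says):

$$0 =_{\mathrm{df}} \varnothing, \qquad 1 =_{\mathrm{df}} \{x : \exists y\; x = \{y\}\}, \qquad 2 =_{\mathrm{df}} \{x : \exists y\,\exists z\,(y \neq z \wedge x = \{y,z\})\},$$

and, on one standard way of continuing, the successor of a number $n$ is the set of all sets obtained by adding one new element to a member of $n$:

$$S(n) =_{\mathrm{df}} \{x : \exists y\,\exists z\,(y \in n \wedge z \notin y \wedge x = y \cup \{z\})\}.$$

The questions raised in the answer then bite at exactly this point: are set-theoretic notions like these genuinely logical notions, and are the principles needed to derive the arithmetical theorems from such definitions themselves logical principles?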

Notation: Q : the formal system (logical and non-logical axioms, etc.) of Robinson's arithmetic; wff : well-formed formula; |- : proves. G1IT (Gödel's first incompleteness theorem) is always stated in the form: if Q is consistent, then there exists a wff x such that ¬(Q |- x) & ¬(Q |- ¬x). But we cannot prove this within Q (simply because there is no deduction rule to express "Q doesn't prove"; there is only modus ponens and generalization), so the statement seems incomplete: I don't see where (in which formal system) it is stated. (Mathematical logic is a formal system too.) In my opinion, one correct answer is to state the theorem within a copy of Q: Q |- (Con(O) → ∃x ((x is a wff of O) & ¬(O |- x) & ¬(O |- ¬x))), where O is a copy of Q inside Q, e.g. ¬(O |- x) is an arithmetical formula of Q, and Con(O) means that a contradiction isn't provable in O; such formulas can be constructed (see Gödel's proof). But I'm confused because I haven't found such a statement (or explanation) anywhere. Thank you very much.

Gödel's first incompleteness theorem applied to the arithmetic Q tells us that there is a corresponding Gödel sentence G_Q such that, if Q is consistent, it can't prove G_Q, and if Q satisfies a rather stronger condition (so-called omega-consistency) then Q can't prove not-G_Q either. How do we establish the incompleteness theorem? There are a number of different arguments. But Gödel's original one depends on the very ingenious trick of using numbers to code facts about proofs. And he shows us how to construct an arithmetical sentence G_Q which -- read in the light of that coding -- "says" that G_Q is not provable in Q. (So of course we don't want Q to prove G_Q or it would prove a falsehood!) Now, Gödel's original proof of the incompleteness theorem, and all the textbook variants, are presented as nearly all mathematics is presented -- i.e. in informal mathematicians' German or English or whatever, with as much detail filled in as is needed to convince. And what's wrong...
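To give just a flavour of the coding trick, here is a deliberately toy sketch (mine, not Gödel's actual numbering scheme, and every name in it is invented for the illustration). Assign each primitive symbol of the formal language a number; a formula, being a finite string of symbols, can then be coded as a single number by using the symbol-codes as exponents of successive primes, and recovered again by factorizing:

```python
# Toy illustration of Gödel-style coding (not Gödel's own scheme).
# Each primitive symbol gets a code number; a formula -- a finite string
# of symbols -- is coded as 2^c1 * 3^c2 * 5^c3 * ..., and can be decoded
# again by prime factorization. All names here are invented for the sketch.

SYMBOL_CODES = {'0': 1, 'S': 2, '+': 3, '=': 4, '(': 5, ')': 6}

def first_primes(k):
    """Return the first k primes by simple trial division."""
    primes, n = [], 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def godel_number(formula):
    """Code a string of symbols as a product of prime powers."""
    codes = [SYMBOL_CODES[s] for s in formula]
    n = 1
    for p, c in zip(first_primes(len(codes)), codes):
        n *= p ** c
    return n

def decode(n):
    """Recover the string from its code by reading off prime exponents."""
    inverse = {v: k for k, v in SYMBOL_CODES.items()}
    symbols, p = [], 2
    while n > 1:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        symbols.append(inverse[exponent])
        p += 1
        while any(p % q == 0 for q in range(2, p)):  # step to the next prime
            p += 1
    return ''.join(symbols)

code = godel_number('S(0)=S(0)')   # the sentence "1 = 1", written with successor S
assert decode(code) == 'S(0)=S(0)'
print(code)
```

Once formulas (and, by a further step, whole proofs, taken as finite sequences of formulas) have numerical codes, claims like "x is provable in Q" become claims about numbers, and that is what lets an arithmetical sentence such as G_Q, read via the coding, "say" something about provability in Q.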

Why is question-begging considered a fallacy when it embodies a deductively valid form of reasoning?

Perhaps this just reflects that the notion of fallacy (in the broad sense) is used in a fairly catch-all way. Let's say (as a first shot) that a fallacy is some flaw in the structure of an argument which prevents its giving rationally persuasive support for its conclusion. And let's distinguish that from the narrower idea of a deductive fallacy, a feature of a supposedly deductively valid inference which prevents its being so. Deductive fallacies are fallacies in the broad sense. But many fallacies in the broad sense are of course not deductive fallacies -- for example, fallacies in various kinds of inductive reasoning. The case of a "question-begging" argument like "P, therefore P" is another sort of case where the argument doesn't involve a deductive fallacy, but the structure prevents the argument giving any rationally persuasive support for its conclusion: so it is deemed to be a fallacy in the broad sense.

Are there any reasons to think that any one language is better suited to reasoning than another? Are there ways in which we could change our language in order to make reasoning easier, or more effective, or to make us less prone to common reasoning errors?

Well, it is certainly true that introducing unambiguous, very carefully defined, agreed terminology and having a perspicuous notation can make reasoning easier and make us less prone to common reasoning errors. To take the obvious example, mathematicians aren't just being awkward when they use a lot of symbolism and make very careful distinctions wrapped up into technical terms (and borrow from the languages of formal logic to make clear, for example, the 'scope' of their quantifiers). If proofs all had to be written out in unaugmented English, then we'd get lost following them, even in elementary high school algebra: and proof-discovery would be orders of magnitude harder. I suppose we might say "mathematicians' English" -- meaning English augmented with their new definitions and notational devices -- is a new, better, language, more suited to (mathematical) reasoning than street English. But equally, we might say that it is just one part of a single inclusive language, modern English: it is just a...
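The point about quantifier scope can be illustrated with a stock example (mine, not part of the answer above). The English sentence "every number is less than some number" is ambiguous between

$$\forall x\,\exists y\,(x < y) \qquad \text{and} \qquad \exists y\,\forall x\,(x < y),$$

and over the natural numbers the first is true while the second is false. The formal notation forces us to say which reading we mean, where unaugmented English lets the ambiguity slide past unnoticed.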

If the same proposition is derived from two different logical processes, are the answers still the same? Or, to reverse the question, can different sub-premises or lower stages of logical reasoning yield the exact same conclusion? Thank you.

Why shouldn't two different chains of reasoning lead to one and the very same conclusion? Mathematicians often give different proofs of the same result. For example, Aigner and Ziegler's wonderful Proofs from THE BOOK starts off with six proofs (chosen from many more) of the same proposition, i.e. Euclid's result that there is an infinite number of prime numbers. The very different routes to the same conclusion are illuminating, as they show up different connections between the fact that there is an infinite number of primes and other mathematical facts. But it is one and the same mathematical proposition that the different connecting proofs all home in on. I've chosen a mathematical example first because of the question's emphasis on "logical reasoning". But the point generalizes to cover other sorts of grounds we might have for accepting a proposition. I take it that Jill is in the coffee bar, as she has just phoned me and told me she is waiting there for me right now. You take it ...
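For concreteness, here is the most familiar of those routes, Euclid's argument, in its usual modern reconstruction (my summary, not a quotation from Aigner and Ziegler). Given any finite list of primes $p_1, p_2, \ldots, p_n$, consider

$$N = p_1 p_2 \cdots p_n + 1.$$

Since $N > 1$, it has at least one prime factor $q$. But $q$ cannot be any of the $p_i$, because dividing $N$ by $p_i$ leaves remainder $1$. So $q$ is a prime missing from the list, and hence no finite list exhausts the primes. The other proofs reach this same conclusion from quite different starting points, which is just the phenomenon of multiple routes to one proposition that the answer describes.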

If our brains evolved to be predisposed to logical fallacies like post hoc ergo propter hoc for beneficial reasons (for example, it has been suggested that susceptibility to post hoc ergo propter hoc aids in the learning of inferences), then might people be harmed if they are trained to overcome (even partially) these predispositions, as teaching them philosophy might do? Should tests be devised for the abilities that those logical fallacies enhance, so that there is a way to determine if training is harmful?

Philosophy departments like to tell themselves (and their funding bodies!) that the study of philosophy distinctively makes their students better all-round thinkers -- in the fashionable jargon, our courses deliver special "transferable skills". Actually, that strikes me as really a rather unlikely claim (at least if it means any more than that our students grow up, get more mature, learn not to jump to conclusions, learn how to write well-presented coherently organized papers, etc., which happens with pretty much any serious academically rigorous degree course). Anyone who has sat through scores of departmental meetings, listening to various bunches of philosophers trying to muddle through organizing their affairs, often making a complete hash of it, knows perfectly well that -- outside their research work -- even the best philosophers are no better at thinking straight and keeping their eye on the ball than anyone else. (And after those departmental meetings, are the pub conversations about...
