So, it's my understanding that Russell and Whitehead's project of logicism in the Principia Mathematica didn't work out. I understand that two reasons for this are (1) that some of their axioms don't seem to be derivable from pure logic and (2) Gödel's incompleteness theorems. However, particularly since symbolic logic and the philosophy of mathematics are not my area, it's hard for me to see how 1 & 2 work and defeat the project.

I agree with Richard's and Alex's general remarks about "logicism" and what counts as "logical". It would indeed be far too quick to reject every form of logicism just because it makes the existence of an infinite number of objects a matter of "logic". Still, it is perhaps worth reiterating (as Richard indeed does) that Principia gets its infinity of objects by theft rather than honest toil: it just asserts an infinity of objects as a bald axiom rather than trying to conjure them out of some more basic logical(?) principles in a more Fregean way. So I'd still want to say that, whatever the fate of other logicisms, Russell and Whitehead's version -- given it is based on theft! -- can't really be judged an honest implementation of the original logicist programme as e.g. described in the Principles, even prescinding from incompleteness worries. But for all that, three cheers for Principia in its centenary year!

In the Principles of Mathematics, Russell boldly asserts "All mathematics deals exclusively with concepts definable in terms of a very small number of logical concepts, and ... all its propositions are deducible from a very small number of fundamental logical principles." Principia, a decade later, is an attempt to make good on that programmatic "logicist" claim. Now, one of the axioms of Principia is an Axiom of Infinity which in effect says that there is an infinite number of things. And you might very well wonder whether that is a truth of logic. (If someone thinks the number of things in the universe is finite, are they making a logical mistake?) Another axiom is the Axiom of Reducibility, which I won't try to explain here, but which is even less obviously a logical law -- and indeed Russell himself argued that we should accept it only because it has nice mathematical consequences in the context of the rest of Principia's system. Still, there is some room for...

Ok, I'm going to go at Gödel backwards. I'm going to start from the fact that the universe exists (whatever others may think to the contrary). I'm assuming that the universe is ruled by law. It also seems to me that the universe can't contain any self-contradictions, or it wouldn't exist in the first place. So, its laws are consistent. For a similar reason, they must be complete; if some key part was missing, the universe wouldn't exist. This line of reasoning seems to lead me to: the laws of the universe are both consistent and complete. I know that Gödel was talking about formal systems, but it just seems to me that the laws of the universe are *the* formal system. So, there is at least one example of a formal system that is both consistent and complete, whether or not we can articulate it. Or have I completely missed Gödel's idea here? Thanks, JT

A formal system (of the kind to which Gödel's incompleteness theorem applies) is a consistent axiomatized theory which contains a modicum of arithmetic and is such that it is mechanically decidable whether a given sentence is or isn't an axiom. Why should we think that the "laws of the universe" can be encapsulated in a formal theory in that sense? Why suppose that all the laws can be wrapped up into a single formal system? It isn't at all obvious why that should be so: maybe the laws of the universe are so rich that they always elude being pinned down by a single formal axiomatic system (such that it is mechanically decidable what's an axiom). Indeed, we might say that Gödel's incompleteness theorem shows that, on a broad enough understanding of "laws of the universe", the laws can't be so pinned down. For any given formal theory, there will be arithmetical truths that that particular formal system can't prove -- so the arithmetical laws of the universe, for a start, always run beyond...

I have recently stumbled upon a short book written by the Catholic theologian Peter Kreeft. He deductively argued for Jesus' divinity through an approach he summarized as "Aut deus aut homo malus." (Either God or a Bad Man.) Basically, his argument works only on assumptions made by most historians: Jesus was a teacher, he claimed divinity, and he was executed. So, assuming this is true, he says Jesus must've been one of three things. One possibility is that he was a liar: he said he was divine even though he knew it was not true. Another possibility is that he was insane: he believed he was divine even though he wasn't. The final possibility is that he was telling the truth and he was correct: he was divine. He goes through and points out that Jesus shows no symptoms of insanity. He had no motive for lying. In fact, he was executed because of his claims. That gives him a motive to deny his divinity, which he apparently was given a chance to do, according to the Jewish and Roman sources on the...

Charles Taliaferro's third sentence could be read as saying that I "give no credence to theism". If that's what he means, he presumes too much. What I give no credence to are bad arguments for theism.

I agree with Alexander George: the argument is hopeless. As it happens, I came across the argument for the first time only recently: and -- when I'd stopped laughing -- I blogged about it, rather rudely. You can read what I said, and 33 comments(!), here.

I'm a first-year student of philosophy at UCLA, and I am interested primarily in philosophy of religion. I've just taken an introductory logic course which covered symbolization, sentential logic, and quantification. There are numerous other logic courses offered through the department, including metalogic, modal logic, etc., and I was wondering if AskPhilosophers could recommend a logic course to take? More specifically, I want to take a logic course that is related to or will aid me in my studies in philosophy of religion. Maybe modal logic, since it deals with necessity and possibility? Thanks.

The short answer is: yes, you are right, a course on modal logic is probably the one most relevant to the philosophy of religion (it will help you understand e.g. modal ontological arguments). But I think it is worth saying a bit more. I'd be a little worried if one of my first-year students said "I'm primarily interested in the philosophy of X" for any X. After all, philosophy is a subject where topics don't compartmentalize easily but connect up in deep and unexpected ways. Beginners should be exploring widely, and leaving themselves open to being gripped by all kinds of problems -- what I like at this stage is a student who says "the philosophy of Y is really exciting: that's what I want to do" one week, and then comes back three weeks later and says "wow, this philosophy of Z course is amazing". And I'd be particularly worried if someone focussed too hard too early on a small area of applied philosophy like the philosophy of religion. This is a...

Is it possible to prove that something cannot be derived (considering only well-formed formulas) in a natural deduction system? I mean, suppose a premise P cannot yield the conclusion Q, since there isn't any logical rule that justifies the inference: how can someone prove this?

We can also sometimes prove non-derivability results by purely "combinatorial" arguments. Here's a well-known toy example, due to Douglas Hofstadter. Consider a derivation system which uses just the symbols M, I, and U, which can be combined to produce strings of symbols in any way you like, e.g. MI, UMIIM, IUUUUU, etc. The rules of our system are as follows:

1. If a string ends in I, you can add a U to the end. For example, from MII you can "infer" MIIU.
2. If a string starts with M, you can "double" the rest of the string (i.e. change Mx to Mxx). For example, from MIUI you can infer MIUIIUI.
3. You can replace any occurrence of III with a U. For example, from MUIIIU you can infer MUUU.
4. You can delete any occurrence of UU. For example, from MIUUUM you can infer MIUM.

Question: Can you start with MI as an "axiom" and derive MU, using the rules 1 to 4? Answer: You can't. I'll leave you to work out why that is so (or to Google the proof!). But here's a hint and a comment. The hint is:...
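As it happens, the non-derivability claim can also be checked mechanically, at least up to a bounded search depth. Here is a minimal sketch in Python (the function names and the search depth are mine, purely illustrative): it enumerates the strings derivable from MI and confirms both that MU never turns up, and that the invariant behind the usual combinatorial proof holds -- the number of I's in any derivable string is never divisible by 3, whereas MU contains no I's at all (and 0 is divisible by 3).

```python
# Brute-force exploration of Hofstadter's MIU system.

def successors(s):
    """All strings obtainable from s by one application of rules 1-4."""
    out = set()
    if s.endswith("I"):                      # rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                    # rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):              # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):              # rule 4: delete UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def derivable(depth):
    """All strings reachable from the axiom MI in at most `depth` steps."""
    frontier = {"MI"}
    seen = set(frontier)
    for _ in range(depth):
        frontier = {t for s in frontier for t in successors(s)} - seen
        seen |= frontier
    return seen

reachable = derivable(6)
assert "MU" not in reachable
# The invariant: the I-count of every derivable string is never 0 mod 3.
assert all(s.count("I") % 3 != 0 for s in reachable)
```

Of course a bounded search isn't a proof -- only the invariant argument rules MU out at every depth -- but checking the invariant on thousands of derivable strings is a good way to convince yourself it is the right one.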

Friedrich Nietzsche introduced the idea of eternal recurrence based on the supposition that if there is only a finite amount of matter in the universe, there are only a finite number of arrangements of that matter, so if time is infinite, each arrangement of matter will be repeated an infinite number of times. Is this argument logically sound? Thanks, kal

Obviously not! Imagine a world with just three particles in it (not in a straight line). One particle stays fixed, the other two move slowly apart for ever (along the line joining them). The arrangement of the three particles in this finite-matter world -- the size and shape of the triangle formed by the particles -- is always changing and never repeats.
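To make the counterexample vivid, here is a small Python sketch (the particular coordinates and speeds are mine, purely illustrative): one particle sits fixed off the line, the other two recede from each other along the x-axis, and since their mutual distance strictly increases, the shape of the triangle is different at every moment -- no arrangement ever recurs.

```python
import math

def positions(t):
    """Toy three-particle world at time t (illustrative coordinates)."""
    a = (0.0, 1.0)               # fixed particle, off the line through B and C
    b = (-1.0 - 0.1 * t, 0.0)    # drifts slowly left
    c = (1.0 + 0.1 * t, 0.0)     # drifts slowly right
    return a, b, c

def bc_distance(t):
    _, b, c = positions(t)
    return math.dist(b, c)

distances = [bc_distance(t) for t in range(100)]
# The B-C distance strictly increases, so the triangle's shape never repeats.
assert all(d2 > d1 for d1, d2 in zip(distances, distances[1:]))
```

The point of the sketch is just that one strictly monotonic quantity is enough: if even one feature of the arrangement never takes the same value twice, the whole arrangement can never recur.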

Can you possibly suggest any good philosophical resources for the study of logic? Concerning validity, soundness, paraphrasing and diagramming. I am studying philosophy at uni and am struggling a lot with just the introduction of the module, and need some extra help, as even the textbooks offered seem a little complex for me.

Any university library will have dozens of elementary textbooks in its collection with titles like "Logic", "Formal Logic", "Elementary Logic", "Beginning Logic", "The Logic Book", etc. etc. The best advice is probably just to quickly browse through the opening chapter or two of a whole pile of them, till you find one that works for you. But here are three specific suggestions. I often recommend Samuel Guttenplan's The Languages of Logic for beginners who are struggling. Another quite excellent resource -- and freely downloadable -- is Paul Teller's A Modern Formal Logic Primer, available at http://tellerprimer.ucdavis.edu/ And then I have to confess I do still rather like P*t*r Sm*th's An Introduction to Formal Logic (though use the heavily corrected 2009 reprint).

Hello. This is a question for the philosophers of mathematics or the logicians. I have heard that first-order logic is complete, and that second-order logic is incomplete. The completeness of first-order logic I have seen characterized as the fact that every true proposition (in the semantic sense) is also provable (in the syntactic sense). I've also heard that the completeness at stake in the two cases is not the same, but it has never been clear to me how they differ. Supposedly second-order logic, having more expressive power, has enough resources to express arithmetic, and thus the first incompleteness theorem applies to it; but that theorem says of such systems that they are incomplete. But I also have heard some people (or maybe I have misheard them) discussing such incompleteness in the same terms, that is, as saying that not every true sentence of such systems is provable, though the converse holds (they are sound). I am no logician, so I would appreciate firstly, if someone can point out any...

Any good textbook that covers second-order logic should in fact clearly answer your question. Here, though, is a summary answer. An inference in the formal language L from the set of premisses A to the conclusion C is valid if every interpretation of L (that respects the meaning of the logical operators) which makes all the members of A true makes C true too. A deductive proof system S for sentences in the formal language L is complete if, for every valid inference from a set of premisses A to the conclusion C, there is a deduction in the system from (some of) the premisses in A to the conclusion C. If L is a first-order language, then there is a deductive system S1 which is complete in the sense defined ("first-order logic", meaning any standard deductive system for first-order logic, is complete). If L is a second-order language, with the second-order quantifiers constrained to run over all the subsets of the domain, then there is no deductive system S2 which is complete in the same sense (any...
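The definitions just given can be set out more compactly in the standard notation (a sketch, writing \(\vDash\) for valid inference and \(\vdash_S\) for derivability in a proof system \(S\)):

```latex
% Validity: the inference from premisses A to conclusion C is valid, A \vDash C,
% iff every interpretation making all of A true makes C true too.

% Completeness of a proof system S:
A \vDash C \;\Longrightarrow\; A \vdash_S C
\quad\text{for every set of premisses } A \text{ and conclusion } C.

% First-order case: some system S_1 achieves this (G\"odel's completeness theorem, 1930):
A \vDash C \;\iff\; A \vdash_{S_1} C.

% Second-order case (quantifiers ranging over ALL subsets of the domain):
% no deductive system S_2 is complete in this sense.
```

So "complete" here is a property of a proof system relative to a semantics -- which is a different use of the word from the "(in)completeness" of a theory in Gödel's first incompleteness theorem.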

What are the defenses against the attacks on the law of non-contradiction? In other words, what is the traditional philosophical orthodoxy's response to developments in paraconsistent logics (Graham Priest's "Doubt Truth to be a Liar" or "In Contradiction", etc.)?

This does sound a bit like a question asking for help with a student paper, which isn't really the role of this site: and certainly this sort of techie question doesn't lend itself to a snappy answer here. So just two comments. First, paraconsistent logics don't have to attack the law of non-contradiction -- i.e. paraconsistent logics don't have to say there are true contradictions (dialetheias): see here for more explanation. Second, for some defences of the law of non-contradiction, see the papers collected in Part V of Priest, Beall and Armour-Garb (eds) The Law of Non-Contradiction .

I have heard that Gödel proved that arithmetic cannot be reduced to logic or formal logic. Although I have read explanations which basically state that arithmetic is not complete and thus not definitional like formal logic, I cannot get my head around how 1+1=2 is NOT reducible to formal logic. This seems like an obvious analytic statement in which "one and one" is the same as saying "two". Can anyone shed light on this?

Well, there is a logical truth in the vicinity of 1 + 1 = 2. Or perhaps better, a whole family of logical truths. Fix on a pair of properties F and G. Then it is a theorem of first-order logic that if exactly one thing is F and exactly one thing is G and nothing is both F and G, then there are exactly two things that are either-F-or-G. Here the numerical quantifiers 'exactly one thing is' and 'exactly two things are' can be defined in standard ways from the ordinary first-order quantifiers and identity. And the theorem holds whatever pair of properties we choose. This elementary logical result probably captures what is driving your intuition that in some sense 1 + 1 = 2 is "reducible to formal logic". (For a bit more on this sort of thing, see my Intro to Formal Logic §33.3 -- or any other standard logic text!) But all that is quite compatible with Gödel's first incompleteness theorem. For Gödel's theorem isn't about some limitation or incompleteness in our ability to prove ...
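Spelled out with the numerical quantifiers unpacked in the usual way, one instance of that family of theorems looks like this (a sketch in standard first-order notation):

```latex
% "Exactly one thing is F, exactly one thing is G, nothing is both,
%  therefore exactly two things are either-F-or-G":
\bigl(\exists x\,(Fx \land \forall y\,(Fy \rightarrow y = x))
  \;\land\; \exists x\,(Gx \land \forall y\,(Gy \rightarrow y = x))
  \;\land\; \neg\exists x\,(Fx \land Gx)\bigr)
\;\rightarrow\;
\exists x\,\exists y\,\bigl(x \neq y
  \land (Fx \lor Gx) \land (Fy \lor Gy)
  \land \forall z\,((Fz \lor Gz) \rightarrow (z = x \lor z = y))\bigr)
```

Note that the numerals have disappeared entirely: 'exactly one' and 'exactly two' are cashed out using just quantifiers and identity, which is what makes this a theorem of pure first-order logic.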
