
Logic

I am reading a book that explains Gödel's proofs of incompleteness, and I found something that disturbs me. There is a hidden premise that says something like "all statements of number theory can be expressed as Gödel numbers". How exactly do we know that? Can that be proved? The book did give a few examples of translations of this kind (for example, how to turn the statement "there is no largest prime number" into a statement of a formal system that resembles PM, and then how to turn that into a Gödel number). So the question is: how do we know that every normal-language number-theory statement has an equivalent in a formal system such as PM? (It does seem intuitive, but what if there's a hole somewhere?)
Accepted:
November 3, 2010

Comments

Peter Smith
November 5, 2010

Gödel's first incompleteness theorem says that any particular formalized theory T which contains enough arithmetic and satisfies some other modest conditions is incomplete.

The standard Gödelian proof depends on coding up T-formulae (including those T-formulae which are statements belonging to number theory) using the Gödel-numbering trick. And you can always do that if T is indeed a formalized theory. This just falls out of the conditions for counting as being properly -- in the jargon, 'effectively' -- formalized. In an effectively formalized theory, by definition, we can nicely and determinately regiment the strings of symbols that count as T-formulae and number them off.
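To make the "Gödel-numbering trick" concrete, here is a minimal sketch of one standard coding scheme (prime-power exponents of symbol codes). The tiny alphabet and the symbol codes are hypothetical choices for illustration; any effective, invertible assignment would do, which is exactly the point about effectively formalized theories.

```python
# Toy Gödel numbering for a tiny formal alphabet. Each symbol gets a code,
# and a formula (a string of symbols) is encoded as a product of prime
# powers: 2^(code of 1st symbol) * 3^(code of 2nd symbol) * ...
# The alphabet and code assignments below are illustrative, not Gödel's own.

SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5, '(': 6, ')': 7, 'x': 8}

def primes():
    """Yield 2, 3, 5, ... by trial division (fine for short formulas)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    """Encode a symbol string as a single natural number."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    """Recover the symbol string from its Gödel number by factoring."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inv[e])
    return ''.join(out)
```

Because both directions are computed by mechanical procedures, syntactic operations on formulas become arithmetical operations on their code numbers, which is what lets an arithmetical theory "talk about" its own formulas.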

But note: it is only required for the standard proof of T's incompleteness that we can code up T-formulae -- and hence code up any statements of number theory expressible in T. It is not claimed that we can code up "all statements of number theory", whatever that might mean. And you would be right to pause over such a dubious claim (for surely we can't lay down once and for all what would count as a statement of any future 'number theory' in the way we can lay down once and for all, and hence number off, what would count as a wff of T's language, for a given formal theory T). But the dubious claim is not needed for Gödel's proof.

(The opening chapter of my Gödel book might help here!)


Richard Heck
November 6, 2010

Peter's explanation is as good as it could be, but let me elaborate on something. You will note that he keeps referring to the "standard proof" of the first incompleteness theorem. There are other proofs, and some of them do not rely upon a coding of this kind. Here's one nice way that stays very close to the "standard" proof. (There are also other, totally different sorts of proofs.)

First, prove incompleteness for a simple theory of syntax, that is, a theory that is actually about symbols: sentences and the like. We can do this by looking at a theory that talks about its own syntax. (Quine sketches such a proof at the end of his book Mathematical Logic.) This part of the proof will look almost exactly like the standard proof, but minus the coding. The reason no coding is needed is that the theory really is one about symbols, and we can define things like sequences and take proofs to be sequences of formulas, etc.
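The idea of "proofs as sequences of formulas" can be sketched directly: checking proofhood is a mechanical test on a sequence. The toy Hilbert-style setup below (axioms as given strings, modus ponens as the only rule, arrow formulas written `(A->B)`) is my hypothetical simplification, not Quine's actual protosyntax.

```python
# Toy proof checker: a "proof" is a sequence of formula-strings in which
# every line is an axiom or follows from two earlier lines by modus ponens.
# The formula syntax "(A->B)" and the rule set are illustrative assumptions.

def is_mp(premise, conditional, conclusion):
    """Modus ponens: from `premise` and `(premise->conclusion)`, infer `conclusion`."""
    return conditional == f"({premise}->{conclusion})"

def is_proof(seq, axioms):
    """Check each line: it must be an axiom, or follow by MP from earlier lines."""
    for i, line in enumerate(seq):
        if line in axioms:
            continue
        if any(is_mp(a, b, line) for a in seq[:i] for b in seq[:i]):
            continue
        return False
    return True
```

The checker works on symbols directly; this is the sense in which a theory of syntax needs no Gödel coding to talk about proofs.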

Second, establish the classical result that any theory that "interprets" an essentially incomplete theory (one that is not just incomplete but incompletable) is itself essentially incomplete. To "interpret" a theory Target in some theory Base is (more or less) to find some way to "define" the basic notions of Target in Base, and to do so in such a way that the axioms of Target then become theorems of Base. What this means, in a way, is that we can "translate" Target into Base in such a way that proofs in Target go over into proofs in Base: If you can prove some theorem A in Target, you can prove its translation in Base. There is no sort of coding here, either, since we are not actually doing anything in Target or Base; the argument is about Target and Base and is given in some "meta-theory" that can again be taken to be a fairly simple syntactic theory (not as simple as Quine's "protosyntax", though).
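The "translation" picture of interpretation can be sketched very crudely: a map from Target's vocabulary to Base expressions that carries whole proofs across line by line. The string-substitution mechanics and the example vocabulary below are hypothetical stand-ins for the real (relativized, schema-respecting) notion.

```python
# Crude sketch of interpretation-as-translation: replace each primitive of
# the Target theory by its "definition" in the Base theory, and translate a
# Target proof line by line. Real interpretations are subtler (they must
# relativize quantifiers, etc.); this only illustrates the shape of the idea.

def translate(formula, definitions):
    """Replace each Target primitive by its Base definition."""
    for primitive, definition in definitions.items():
        formula = formula.replace(primitive, definition)
    return formula

def translate_proof(proof, definitions):
    """A Target proof goes over into a Base proof, line by line."""
    return [translate(line, definitions) for line in proof]
```

If the translations of Target's axioms are Base theorems and the rules are preserved, provability transfers: that is why interpreting an essentially incomplete theory makes the interpreting theory essentially incomplete too.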

Third, show that your favorite arithmetical theory (say) is sufficient to interpret the syntactic theory used in the first step. How strong the result is will then depend upon how weak that syntactic theory can be taken to be. We know now that it can be taken to have the same strength as Robinson arithmetic, or Q, which is very, very weak. (Q is so weak that, though it proves that x + 0 = x, it doesn't prove that 0 + x = x!!) I think the theory Quine uses is just such a theory.
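To see why Q is that lopsided, it helps to write out its addition axioms (numbered here as (Q4) and (Q5); numbering varies by presentation). The derivation sketch is mine, but the point is standard: the recursion runs on the second argument only, and the other direction needs induction, which Q lacks.

```latex
% Q's two axioms for addition (recursion on the second argument):
%   (Q4)  \forall x\, (x + 0 = x)
%   (Q5)  \forall x\, \forall y\, (x + S(y) = S(x + y))
%
% x + 0 = x is immediate: it is just an instance of (Q4).
% 0 + x = x would be proved by induction on x:
%   base:  0 + 0 = 0                      by (Q4)
%   step:  0 + S(y) = S(0 + y) = S(y)     by (Q5), then the induction hypothesis
% But Q has no induction scheme, so this derivation is unavailable in Q.
```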

Even if we limit attention to the arithmetical case, however, there is a clear sense in which the "standard proof" does not really depend upon the details of how we code formulas by numbers. The proof works for any "reasonable" coding, where a "reasonable" coding is one that is itself "formal" in much the same sense in which we speak of the theory itself as being "formal". (Ultimately, these are all notions of recursion theory.)

Things are entirely different when we get to the second incompleteness theorem, which is the one that says that no consistent formal system meeting certain minimum requirements can prove its own consistency. This is for two reasons.

First, it is not at all clear what "proves its own consistency" is supposed to mean if the theory we are discussing is arithmetical. The theory proves things like "every number has a unique prime factorization", or "2 + 2 = 4". It does not talk about formulas, let alone about proof or consistency. Getting it to do so depends upon coding, so now coding is essential to the very statement of the theorem, not just to its proof.

Second, now it definitely does matter how we do the coding. This observation is due to Solomon Feferman, in his classic paper "Arithmetization of Metamathematics in a General Setting". If you use a weird coding, then you can get the theory to prove what looks prima facie like the statement that it is consistent. Alternatively, and this was known earlier, you can define the relation "X is a proof of Y" in a funny way, and then it will be totally trivial for the system to prove that it is consistent. And this definition of "X is a proof of Y" will be correct, in the sense that, when the theory in question proves "X is a proof of Y", then it really is; and if it is, then it will prove that it is. But in another sense, the definition is bad: the defined predicate doesn't seem to mean "X is a proof of Y", even though it does have the right "extension". Rather, it means something like "X is a proof of Y, and no proof as short as X is a proof of a contradiction". Now you can see why the proof of consistency is so trivial. Of course there's no proof of a contradiction, if "proof" means: a proof such that no proof as short as it is a proof of a contradiction!
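The shape of that "funny" proof predicate can be sketched in miniature. Everything below is a hypothetical toy: proofs are integers, the honest relation is a small lookup table, and the only point is to display the Rosser/Feferman-style definition and check that, when the honest relation proves no contradiction, the two predicates coincide extensionally.

```python
# Toy model of the "funny" proof predicate. Proofs are integers (which also
# serve as their own "lengths"); the honest relation is a made-up table for
# a *consistent* toy system, and "0=1" stands in for a contradiction.

PROOFS = {(1, "A"), (2, "(A->B)"), (3, "B")}  # hypothetical honest relation

def honest_is_proof(p, y):
    return (p, y) in PROOFS

def proves_contradiction(p):
    return honest_is_proof(p, "0=1")

def funny_is_proof(p, y):
    """p proves y, AND no proof as short as p proves a contradiction."""
    return honest_is_proof(p, y) and not any(
        proves_contradiction(q) for q in range(p + 1))
```

Since the toy system is consistent, `funny_is_proof` and `honest_is_proof` have exactly the same extension -- the sense in which the funny definition is "correct" -- yet "no p funny-proves a contradiction" holds for the self-undermining reason in the text, not because anything substantive about consistency has been established.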

All of this Feferman expresses by saying that the second incompleteness theorem is "intensional" in a way the first is not: it is bound up somehow with questions of meaning; we need to make sure that we define "X proves Y", etc., so that we get the right meaning. What on earth does that mean? And how did it get into mathematics? And how can we get it out again? Much interesting work has been done since Feferman's paper, by him and by others, on this sort of question: How can we best state the second theorem? What's the best proof? Just how far can we go towards removing the dependence on coding and such? Some very interesting work has been done recently by Albert Visser, especially in his paper "Can We Make the Second Incompleteness Theorem Coordinate Free?". Though, I warn you, it is very technical.

Source URL: https://askphilosophers.org/question/3643
© 2005-2025 AskPhilosophers.org