# If I investigate the Goldbach conjecture by testing individual even integers to verify that they accord with it, do I have more reason to believe that the conjecture is true the more integers I verify? Or am I in just the same epistemic position regarding the conjecture whether I've verified one integer or a billion?

### As you clearly know, no

As you clearly know, no matter how many integers you have checked, that will always be a finite set, and so there will always be infinitely many integers you have not checked. Unless you have some reason to believe that a counterexample to Goldbach must be "low", it's hard to see why checking a handful of cases should give you any more confidence that Goldbach is true. But there are some weird issues about how probability behaves in such cases, about which Timothy Williamson and others have written.

# In mathematics, it is commonly accepted that it is impossible to divide any number by zero. But I don't see why this necessarily has to be the case. For example, it used to be thought impossible to take the square root of a negative number, until imaginary numbers were invented. If one could create another set of numbers to account for the square root of negatives, then what is stopping anyone from creating another set of numbers to account for division by zero?

### It's actually easy to invent

It's actually easy to invent a system of numbers in which division by zero is possible. Just take the usual non-negative rational numbers, say, and add one new number, "infinity". Then we can let anything divided by zero be infinity. Infinity plus or times anything is infinity. Infinity minus or divided by any rational is still infinity. We have a bit more choice about what to say about infinity minus infinity, or infinity divided by infinity. But we can let those be infinity, too, if we like. So infinity kind of "swallows" everything else. (Oh, and any rational divided by infinity should be 0.) Note, however, that many of the usual laws concerning multiplication and division now fail. For example, it's true in the usual case that, if a/b exists, then a = (a/b) × b. But (3/0) × 0 = infinity, not 3; of course, you can carve out an exception for 0, if you wish, but there's no way to make that work in all cases. This is not a fatal flaw, though. In the reals, a × a is never negative; not so once we add imaginary numbers. So...
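The system described above is easy to make concrete. Here is a small Python sketch (the names `INF`, `add`, `mul`, and `div` are mine, not standard terminology) implementing the non-negative rationals plus one new number, and showing exactly where the law a = (a/b) × b breaks down:

```python
from fractions import Fraction

class _Infinity:
    """The single new number we adjoin to the non-negative rationals."""
    def __repr__(self):
        return "infinity"

INF = _Infinity()

def add(a, b):
    # infinity plus anything is infinity
    return INF if (a is INF or b is INF) else a + b

def mul(a, b):
    # infinity times anything is infinity (we choose INF * 0 = INF, too)
    return INF if (a is INF or b is INF) else a * b

def div(a, b):
    if a is INF:
        return INF           # infinity divided by anything is infinity
    if b is INF:
        return Fraction(0)   # any rational divided by infinity is 0
    if b == 0:
        return INF           # the new rule: a / 0 = infinity
    return a / b

# The usual law a = (a/b) * b now fails when b = 0:
a, b = Fraction(3), Fraction(0)
print(div(a, b))           # infinity
print(mul(div(a, b), b))   # infinity, not 3
```

Running it makes the trade-off vivid: division by zero is now total, but (3/0) × 0 comes out as infinity rather than 3.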

# In writing mathematical proofs, I've been struck that direct proofs often seem to offer a kind of explanation for the theorem in question; an answer to the question "Why is this true?", as it were. By contrast, proofs by contradiction or indirect proofs often seem to lack this explanatory element, even if they work just as well to prove the theorem. The thing is, I'm not sure it really makes sense to talk of mathematical "explanations." In science, explanations usually seem to involve finding some kind of mechanism behind a particular phenomenon or observation. But it isn't clear that anything similar happens in math. To take the opposing view, it seems plausible to suppose that all we can really talk about in math is logical entailment. And so, if both a direct and an indirect proof entail the theorem in question, it's a mistake to think that the former is giving us something that the latter is not. Do the panelists have any insight into this?

Anyone with any mathematical training will be familiar with the fact that proofs in mathematics do much more than just show that the statement proved is true. One way this manifests itself is that we often value different proofs of the same theorem. Thus, as Jamie Tappenden once pointed out, Herstein's Topics in Algebra, which was the standard algebra text when I was a student, contains three different proofs of the Stone Representation Theorem. Boolos, Burgess, and Jeffrey's Computability and Logic, one standard text for an intermediate logic course, similarly contains multiple proofs of several of the key results, including Church's Theorem on the undecidability of first-order logic and Gödel's First Incompleteness Theorem. And, oddly enough, I myself have just re-proven an existing result in a way that, I think, is clearly better. But not because the original proof wasn't convincing! It's an interesting question, though, why we value different proofs. Somehow, they seem to throw...
I probably should have noted before that, in the case of the different proofs of the first incompleteness theorem in Boolos, Burgess, and Jeffrey, the first proof they give is indirect or, as it is sometimes put, non-constructive: The proof shows us that, in any given consistent theory of sufficient strength, there is an "undecidable" sentence, one that is neither provable nor refutable by that theory; but the proof does not actually provide us with an example of an undecidable sentence. The second proof, which is closer to Gödel's own, is direct and constructive: It does give us such a sentence, the so-called Gödel sentence for the theory. By doing so, it gives us more information than the first proof. It shows us, in particular, that there will always be an "undecidable" sentence of a very particular form (a so-called Π₁ sentence). This is a good example of why constructive proofs are often better than non-constructive proofs: They often give us more information. But it does not directly...

# Dear philosophers, I really appreciate your website, which I just discovered! I'd like to make one comment regarding the recent questions about infinite sets on March 7 and March 14. In your responses (Allen Stairs and Richard Heck on March 14), you write that you do not know of any professional mathematicians who deny the existence of infinite sets. However, such mathematicians do indeed exist (although marginally). They are sometimes referred to as "ultrafinitists". One well-known living proponent of this view is the Princeton mathematician Edward Nelson; see also http://en.wikipedia.org/wiki/Edward_Nelson and http://en.wikipedia.org/wiki/Ultrafinitism. Specifically, one argument an ultrafinitist might use is that formal proofs are finite. Thus, although we might use the concept of infinite sets in our reasoning, there is no need to assume that infinite sets actually exist, because any mathematical statement could be preceded by the phrase "There is a finite proof of the statement that ..." I hope this...

What I said was: "It's important to distinguish two different issues: (i) whether there are infinitely many natural numbers; (ii) whether there are mathematical objects that are themselves infinite. And it is possible to accept that there are infinitely many natural numbers without accepting that there is a set of all of them or, more generally, that there are any objects that are, in their own right, infinite. And there are respected mathematicians who hold this kind of view, though they are definitely a minority." So I was not saying that no professional mathematicians would deny the existence of infinite sets. Indeed, Nelson was very much the sort of person I had in mind. He may well be an example of a mathematician who does not think that there is a largest natural number but who also does not think that there are any infinite sets. But I'm not absolutely sure about this, because much talk of infinite sets can be coded in the sorts of weak theories that Nelson would accept. I think...

# Hi, I was hoping for some clarification from Professor Maitzen about his comments on infinite sets (on March 7). The fact that every natural number has a successor is only true for the natural numbers so far encountered (and imagined, I suppose). Granted, I can't conceive of how it could be that we couldn't just add 1 to any natural number to get another one, but that doesn't mean it's impossible. It seems quite strange, but there are some professional mathematicians who claim that the existence of a largest natural number (probably so large that we would never come close to dealing with it) is much less strange and problematic than many of the conclusions that result from the acceptance of infinities. If we want to define natural numbers such that each natural number by definition has a successor, then mathematical induction tells us there are infinitely many of them. But mathematical induction itself only proves things given certain mathematical definitions. Whether those definitions indeed...

I'm not familiar, either, with any working mathematicians who think there is a largest natural number or, more specifically, that there are only finitely many numbers. I do know of some work, by Graham Priest, that investigates finite models of arithmetical theories, but this is in the context of so-called paraconsistent logics. In Priest's theories, it is true that there is a greatest natural number, but it is also true that there isn't one! But that is probably not the kind of thing the questioner meant. Part of the reason mathematicians are happy with infinity is that infinity is very cheap. Consider, for example, ordered pairs. If you think (a) that, given any two objects, there is an ordered pair of them and (b) that there is an object that is not a pair, then it follows that there are infinitely many pairs. Or consider the English sentences. Not just the ones someone has uttered or written down, since there are ever so many English sentences no one happens to have uttered before (such, I am sure,...
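The ordered-pair argument can be made vivid with a toy computation. The sketch below (my own illustration, using Python tuples for ordered pairs and a string for the one guaranteed non-pair) shows that iterating the pairing operation never cycles, so clauses (a) and (b) together yield as many distinct objects as we care to generate:

```python
# Clause (a): any two objects have an ordered pair; clause (b): at
# least one object -- here the string "o" -- is not itself a pair.
seed = "o"
x = seed
distinct = set()
for _ in range(50):
    distinct.add(x)
    x = (x, x)          # form the pair of x with itself
# Each pairing step yields a brand-new object, never one seen before:
assert len(distinct) == 50
```

Of course, a program only ever produces finitely many objects; the point is that the process has no natural stopping place, which is just the sense in which "infinity is very cheap".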

# Do infinite sets exist? Most mathematicians say yes, but to me it seems like infinite sets can only exist if we use inductive reasoning but not deductive reasoning. For example, in the set {1,2,3,4,...} we can't prove that the ... really means what we want it to. No one has shown that the universe doesn't implode before certain large enough "numbers" are ever glimpsed, so how can we say they exist as part of an "object" like a set? We can only do this by assuming the existence of the rest of the set, since that seems logical based on our experience. But that seems like a rather weak argument.

The argument here actually requires two more premises: (iv) that different numbers have different successors and (v) that 1 is not the successor of anything. If (v) failed, 1 could be its own successor and the only number. If (iv) failed, then 2 could be 1's successor and also its own. It's perhaps also worth noting that, although (ii)-(v) do imply that there are infinitely many numbers, it does not follow from them that there are sets that have infinitely many members. This is because (ii)-(v) say nothing about sets, and we cannot simply assume that there is a set containing all the infinitely many numbers. Analogues of (ii)-(v) hold in so-called hereditarily finite set theory, in which there are no infinite sets. (Indeed, one can consistently add the axiom "there are no infinite sets" to this theory.) Finally, the general observation that not everything can be proven does not imply that one can't reasonably ask that (ii)-(v) be proven, nor that one might not worry that, say, (iii) is too close to ...
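The point that successor-style axioms can hold while every object stays finite can be illustrated concretely. Here is a small Python sketch (my own, modelling hereditarily finite sets as `frozenset`s, with the von Neumann successor S(x) = x ∪ {x}); it checks analogues of (iv) and (v) along the way:

```python
def succ(x):
    # von Neumann successor: S(x) = x ∪ {x}
    return x | frozenset({x})

zero = frozenset()
n = zero
for _ in range(5):
    m = succ(n)
    assert m != n        # analogue of (iv): the successor is always new
    assert m != zero     # analogue of (v): zero is no one's successor
    n = m

# After five steps n is a five-element set: every stage is finite,
# even though the construction can be continued without end.
assert len(n) == 5
```

Every number generated this way is itself a finite set, which is the sense in which hereditarily finite set theory supports "there is always another number" without any infinite object.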

# Is it possible for a mathematical equation to both be fundamentally unsolvable and also have a correct answer?

To answer this question properly, we would need to make some of the terms used in the question more precise. Math only works with precise definitions. But there is a natural way to do this, and it does bring us close to Gödel's work. A diophantine equation is any equation of the form f(x, y, z, ..., w) = 0, where f is a polynomial (i.e., something like x³ + 3x²y² + 4xy³), and the question is: Is there an integral solution to the equation? I.e., a way of assigning integers (positive or negative whole numbers, or zero) to x, y, z, ..., and w so that the equation comes out true? One very famous such equation is: x⁷ + y⁷ = z⁷. This is what people call "Fermat's Theorem for 7". We now know that it has no solutions in positive integers, and the same goes for any other prime exponent except 2. Diophantine equations crop up all over mathematics. So, in a famous lecture in 1900, the great mathematician David Hilbert posed the question: Is there some general...
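The asymmetry at the heart of Hilbert's question is easy to see in code. A naive search (a sketch of my own; the function names are not standard) can confirm that a solution exists by finding one, but a failed search over any finite box proves nothing about the infinitely many values outside it:

```python
from itertools import product

def search(f, nvars, bound):
    """Naively try every assignment of integers in [-bound, bound]
    to the nvars variables; return the first zero of f found."""
    rng = range(-bound, bound + 1)
    for vals in product(rng, repeat=nvars):
        if f(*vals) == 0:
            return vals
    return None  # inconclusive: silent about everything outside the box

# x^2 - 2y^2 = 1, a Pell equation, has small solutions:
print(search(lambda x, y: x * x - 2 * y * y - 1, 2, 5))   # -> (-3, -2)

# x^2 + 1 = 0 has no integer solutions, but the search can only
# report that none lie in the box it examined:
print(search(lambda x: x * x + 1, 1, 100))                # -> None
```

Hilbert asked for something much stronger than this brute force: a single procedure guaranteed to decide, for every diophantine equation, whether a solution exists at all.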

# Having an almost three year old daughter leads me into deep philosophical questions about mathematics. :-) Really, I am concerned about the concept of "being able to count". People ask me if my daughter can count and I can't avoid giving long answers people were not expecting. Firstly, my daughter is very good in "how many" questions when the things to count are one, two or three, and sometimes gives that kind of information without being asked. But she doesn't really count them, she just "sees" that there are three, two or one of these things and she tells it. Once in a while she does the same in relation to four things, but that's rare. Secondly, she can reproduce the series of the names of numbers from 1 to 12. (Then she jumps to the word for "fourteen" in our language, and that's it.) But I don't think she can count to 12. Thirdly, she is usually very exact in counting to four, five or six, but she makes some surprising mistakes. Yesterday, she was counting the legs of a (plastic) donkey (in natural...

Most of these questions are not so much philosophical as empirical, and there has been a tremendous amount of extremely important work done in the last few decades on children's concepts of number. The locus classicus is The Child's Understanding of Number, by Rochel Gelman and Randy Gallistel, which was originally published in 1978, but this stuff really took off in the late 1990s or so. A lot of people have contributed to this work, but I'll mention two: Susan Carey and Liz Spelke, who are both at Harvard. You will find links to some of their work on their websites. Part of the reason people got interested in these issues is because they are closely related to issues about object recognition and individuation, which had been a focus of a great deal of work just before that. (I.e., people had been interested in the question at what age children start to "pick out" objects from the environment, and to think of them as distinct entities that continue to exist even when you do not see them....

# Typical statements (first order) of the Peano Axioms puzzle me. Neither a mathematician nor logician, I find myself thinking the following: One would hope that arithmetic is consistent with the world as it is. So the axioms of arithmetic should be true in a domain containing the items that populate reality, e.g., a domain containing this keyboard upon which I now type. But this keyboard is neither identical to zero nor is it the successor (or predecessor) of any whole non-negative number. So what's with, e.g., (Ax)(x = 0 v (Ey)(x = Sy))? On what one would think is its intended interpretation, the axiom (a theorem in some versions) seems false "of reality." And some other typical items of (first order) expositions seem either false or at least meaningless, e.g., (Ax)(Ay)(x + Sy = S(x + y)). What could be meant by "the sum of this keyboard and the successor of 6 is equal to the successor of the sum of this keyboard and the positive integer 6"? Unless one has already limited the domain to exclude typical non...

You've pretty much answered your own question. There are two ways of thinking about this. On the first, the "domain" of the theory being axiomatized is taken to consist only of the natural numbers (i.e., the non-negative integers). So it is, in a way, like when the coach says to the driver, "Everyone is on the bus". She doesn't really mean that everyone is on the bus, only that everyone on the team, or whatever, is on the bus. We speak this way all the time. It's not exactly the same phenomenon, but it's close enough to get the idea. The second way, which you mention in connection with Tarski, is to introduce an explicit restriction to the natural numbers into the axioms. So let "Nx" be a predicate letter meaning: x is a natural number. Then the idea is to "relativize" the axioms to Nx: We replace each universal quantifier (∀x) by (∀x)(Nx → ...), and each existential quantifier (∃x) by (∃x)(Nx & ...). So the addition axioms will take the form: (∀x)(Nx → x + 0 = x), (∀x)(∀y)(Nx & Ny → x +...
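Written out schematically, the relativization recipe looks like this (a sketch in LaTeX notation; the second addition axiom is completed from the unrelativized form quoted in the question):

```latex
% Relativize each quantifier to the predicate N ("is a natural number"):
(\forall x)\,\varphi \;\rightsquigarrow\; (\forall x)(Nx \rightarrow \varphi)
\qquad
(\exists x)\,\varphi \;\rightsquigarrow\; (\exists x)(Nx \mathbin{\&} \varphi)

% Applied to the addition axioms:
(\forall x)(Nx \rightarrow x + 0 = x)

(\forall x)(\forall y)(Nx \mathbin{\&} Ny \rightarrow x + Sy = S(x + y))
```

On this version, the axioms say nothing at all about keyboards: an object that fails to satisfy Nx vacuously satisfies every relativized universal claim.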

# For a long time I have been very concerned with clarifying mathematics, primarily for myself but also because I plan to teach. After decades of reading and questioning and thinking, it seems to me that the philosophical views of mathematics are nonsensical. What does it MEAN to question whether mathematical objects exist outside of our minds? It sounds absurd. It seems clear to me that mathematics is a science like all the others except that verification (confirmation) is different. It is the science of QUANTITY and its amazing developments and offshoots (like set theory). And all sciences are products of our minds. They are our constructions, as are most of the physical objects in our immediate worlds. Shoes, sinks, forks, radios, computers, computer programs, eyeglasses, cars, planes, airports, buildings, roads, and on ad nauseam, are ALL our constructions. Nature didn't produce any of them. We did. What does it MEAN to speak of a "PHYSICAL" circle? A circle is OUR IDEA of a plane locus...

"...[A]ll sciences are products of our minds. They are our constructions, as are most of the physical objects in our immediate worlds." That is no doubt true, but it misses a crucial point: Scientific theories are of course human creations, but the things those theories are about are (generally) not. People do not make quarks, atoms, or molecules, fields, stars or galaxies, bacteria, birds, or insects, and so on. Nor is it up to us whether the theories we invent are true. And even when physicists discuss colliding billiard balls, the fact that the balls were made by us is neither here nor there. They are external objects, and it is not up to us how they will behave. Much the same is true of mathematical objects. I see no reason whatsoever to believe that numbers are a human creation, any more than tectonic plates are. And it is just a confusion to think that a circle is an idea. Surely whatever ideas I have are in my mind. Is mathematics supposed to be about the contents of my...