Do infinite sets exist? Most mathematicians say yes, but to me it seems that infinite sets can exist only if we use inductive reasoning rather than deductive reasoning. For example, in the set {1,2,3,4,...} we can't prove that the "..." really means what we want it to. No one has shown that the universe doesn't implode before certain large enough "numbers" are ever glimpsed, so how can we say they exist as part of an "object" like a set? We can only do this by assuming the existence of the rest of the set, since that seems logical based on our experience. But that seems like a rather weak argument.

We can use mathematical induction to prove that (i) infinitely many natural numbers exist from the premise that (ii) 1 is a natural number and the premise that (iii) every natural number has a successor. Although it's called mathematical "induction," it's actually deductive reasoning. I take it that (ii) is beyond dispute, and (iii) is at any rate very hard to deny! It won't do to demand proof of (ii) or (iii) before accepting this proof of (i), for if the premises in any proof must themselves have been proven, then we have an infinite regress: nothing could be proven in a finite amount of time. We've therefore proven that infinitely many natural numbers exist. The notation "{1,2,3,4,...}" is just one way of referring to the set containing all and only those infinitely many numbers. It's perhaps a fallible way of referring to that set, because it assumes that the audience knows which number comes next in the series. A more reliable way of referring to the set is "the set whose members are the...
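For readers who want the reasoning laid out symbolically, here is a minimal sketch in Peano-style notation. The predicate N, the successor function s, and the last two premises (which say that successors never circle back to an earlier number) are my own glosses on the standard reading, not anything stated above:

\begin{align*}
&N(1) && \text{(ii) 1 is a natural number} \\
&\forall n\,\bigl(N(n) \rightarrow N(s(n))\bigr) && \text{(iii) every natural number has a successor} \\
&\forall m\,\forall n\,\bigl(s(m) = s(n) \rightarrow m = n\bigr) && \text{(successor is injective; assumed)} \\
&\forall n\,\bigl(s(n) \neq 1\bigr) && \text{(1 is no number's successor; assumed)} \\
&\therefore\ 1,\ s(1),\ s(s(1)),\ \ldots \ \text{are pairwise distinct natural numbers.}
\end{align*}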

Euclid in "Elements" wrote that "things which equal the same thing also equal one another." Is this true in all cases? I've read that it is only true for "absolute entities," but not to "relations," although I do not understand this exemption. Are there any examples of things that are equal to the same thing but not to one another? Are relations really exempt from Euclid's axiom, and if so, why?

If by the adjective "equal" Euclid means "identical in magnitude" (which I gather is what he does mean), then his principle follows from the combination of the symmetry of identity and the transitivity of identity . The symmetry of identity says that, for any x and y , x is identical to y if and only if y is identical to x . The transitivity of identity says that, for any x , y , and z , if x is identical to y and y is identical to z , then x is identical to z . Therefore, Euclid's principle has exceptions only if the symmetry of identity sometimes fails or the transitivity of identity sometimes fails. But I don't think either of them ever fails. Now, some relations that are similar to the identity relation aren't transitive. I might be (1) unable to tell the difference between color swatches A and B, (2) unable to tell the difference between swatches B and C, yet (3) able to tell the difference between swatches A and C. But...

I've recently read that some mathematicians believe that there are "no necessary truths" in mathematics. Is this true? And if it is, what implications would it have for deductive logic, given that deductive logical forms depend to some degree on mathematical arguments? Would mathematical truths in this case be "contingently necessary"?

Your question is tantalizing. I wish it had included a citation to mathematicians who say what you report them as saying. On the face of it, their claim looks implausible. Are there no necessary truths at all? If there are necessary truths, how could the mathematical truth that 1 = 1 not be among them? One way to hold that mathematicians seek only contingent truths might be as follows. If some philosophers are correct that propositions are to be identified with sets of possible worlds, then there's only one necessarily true proposition, because there's only one set whose members are all the possible worlds there are. That single necessarily true proposition (call it "T") will be expressed by indefinitely many different sentences, including the sentences "1 = 1" and "No red things are colorless," and it will be contingent just which sentences express T. On this view, mathematicians don't try to discover various necessary truths, since there's just one necessary truth, T. ...
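On that picture the point can be put in a single line; here W is my label for the set of all possible worlds, and p ranges over propositions construed as subsets of W:

\[
p \text{ is necessarily true} \iff p = W, \qquad \text{so } T := W \text{ is the one and only necessary proposition.}
\]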

Is it possible for a mathematical equation to be both fundamentally unsolvable and yet have a correct answer?

I hope philosophers of math on the Panel will respond with more authority than I have. My understanding is that Gödel showed that arithmetic contains pairs of mutually contradictory statements neither one of which is provable within arithmetic. Assuming the standard logical law that exactly one of every pair of mutually contradictory statements is true, we get the result that some arithmetical truths are unprovable within arithmetic. I can't say whether those truths include statements to the effect that such-and-such is the solution to an equation, but if they do, and if their being unprovable within arithmetic makes the associated equations "fundamentally unsolvable," then the answer to your question is yes. Someone might reply that an unprovable arithmetical statement can't be true, but I think that would be to mistake truth for provability.
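Schematically, and assuming the usual presentation in terms of a Gödel sentence G for a consistent, recursively axiomatized theory T extending arithmetic (my gloss, not part of the question or answer above):

\[
T \nvdash G, \qquad T \nvdash \neg G, \qquad \text{yet exactly one of } G,\ \neg G \text{ is true in the standard model } \mathbb{N}.
\]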

Is mathematics grounded in logic or is logic grounded in mathematics?

I leave it to the experts on the Panel (and there are several) to give you a proper answer, but I would certainly reject the second of your alternatives: I can't see how logic could be grounded in mathematics. It's a more controversial issue whether mathematics is grounded in logic and, if it is, what that grounding amounts to.

The equalities x-x=0 and 0=x-x are supposed to be the same. The first equality is easy to understand, while the second equality (0=x-x) is somewhat mind-boggling to me for the following reason: where do the two x's on the right side come from? Thanks, Kal

Assuming I understand your question: They come from the same "place" in each equation, namely, from anywhere at all. It might help to think of it this way: "What's the result of subtracting any magnitude at all from itself? Zero." "What's zero? The result of subtracting any magnitude at all from itself." Each answer is just as good as the other in answering the respective question being asked.
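Spelled out, the two equations say exactly the same thing, because equality is symmetric; x here ranges over any magnitude you like:

\[
\forall x\,\bigl(x - x = 0\bigr) \quad\Longleftrightarrow\quad \forall x\,\bigl(0 = x - x\bigr).
\]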

I know some philosophers think numbers exist, and some others think the opposite. Do some of you think that this question is or may be "undecidable"? I mean, perhaps both the idea that numbers exist and the idea that numbers don't exist are consistent with all other things that we believe (do not contradict any one of them). Do you think this might be right?

Not really my area, but until someone else responds... I can see why you'd be tempted to think so. If numbers -- standardly understood as abstract objects -- exist, they're causally inert, and so they can't affect the world in any way. But I'm not sure that implies that their existence is just as compatible as their non-existence is with everything else we believe. It's highly plausible that numbers are essentially noncontingent: they exist necessarily if they exist at all. The concept of number doesn't seem to be a concept that could be instantiated only contingently. So, given common modal assumptions, it's either necessarily true that numbers exist or else necessarily false that numbers exist. Whichever one of those it is, then, the other one is impossible and hence inconsistent with everything we believe. Now, we might never be able to discover that inconsistency, and so the question whether numbers exist might be undecidable in that sense. But I'd be surprised if it were...
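One way to make the modal step precise, writing E for "numbers exist" and assuming both the non-contingency premise and an S5-style modal logic (these assumptions are my gloss on "common modal assumptions"):

\[
\Box(E \rightarrow \Box E) \ \vdash_{\mathrm{S5}} \ \Box E \ \vee\ \Box\neg E,
\]

since in S5, \(\Diamond E\) together with \(\Box(E \rightarrow \Box E)\) yields \(\Diamond\Box E\) and hence \(\Box E\).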
