# If I investigate the Goldbach conjecture by testing individual even integers to verify that they accord with it, do I have more reason to believe that the conjecture is true the more integers I verify? Or am I in just the same epistemic position regarding the conjecture whether I've verified one integer or a billion?

As you clearly know, no matter how many integers you have checked, that will always be a finite set, and so there will always be infinitely many integers you have not checked. Unless you have some reason to believe that a counterexample to Goldbach must be "low", it's hard to see why checking a handful of cases should give you any more confidence that Goldbach is true. But there are some weird issues about how probability behaves in such cases, about which Timothy Williamson and others have written.
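
For concreteness, here is a minimal sketch in Python of the kind of finite check the question describes (the helper names, such as goldbach_witness, are my own, purely illustrative). However far the loop runs, it only ever inspects finitely many even numbers:

```python
def is_prime(n):
    """Trial-division primality test; fine for the small numbers used here."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(n):
    """Return a pair of primes summing to the even number n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Check every even number from 4 up to some finite bound.
checked = range(4, 10_000, 2)
assert all(goldbach_witness(n) is not None for n in checked)
print("No counterexample below 10,000; infinitely many even numbers remain unchecked.")
```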

# In mathematics, it is commonly accepted that it is impossible to divide any number by zero. But I don't see why this necessarily has to be the case. For example, it used to be thought impossible to take the square root of a negative number, until imaginary numbers were invented. If one could create another set of numbers to account for the square roots of negatives, then what is stopping anyone from creating another set of numbers to account for division by zero?

It's actually easy to invent a system of numbers in which division by zero is possible. Just take the usual non-negative rational numbers, say, and add one new number, "infinity". Then we can let anything divided by zero be infinity. Infinity plus or times anything is infinity. Infinity minus or divided by any rational is still infinity. We have a bit more choice about what to say about infinity minus infinity, or infinity divided by infinity. But we can let those be infinity, too, if we like. So infinity kind of "swallows" everything else. (Oh, and any rational divided by infinity should be 0.) Note, however, that many of the usual laws concerning multiplication and division now fail. For example, it's true in the usual case that, if a/b exists, then a = (a/b) x b. But (3/0) x 0 = infinity, not 3; of course, you can carve out an exception for 0, if you wish, but there's no way to make that work in all cases. This is not a fatal flaw, though. In the reals, a x a is never negative; not so when we add imaginary numbers. So...
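
To make the construction concrete, here is a minimal sketch of such an extended arithmetic in Python (the function names are my own, purely illustrative). It also shows the law a = (a/b) x b failing at b = 0:

```python
from fractions import Fraction

INF = "infinity"  # the single new element adjoined to the non-negative rationals

def div(a, b):
    """Division in the extended system: anything divided by zero is infinity."""
    if a == INF or b == 0:
        return INF
    if b == INF:
        return Fraction(0)   # any rational divided by infinity is 0
    return Fraction(a) / Fraction(b)

def mul(a, b):
    """Multiplication: infinity times anything is infinity."""
    if a == INF or b == INF:
        return INF
    return Fraction(a) * Fraction(b)

# The usual law  a = (a/b) x b  now fails when b = 0:
print(div(3, 0))           # infinity
print(mul(div(3, 0), 0))   # infinity, not 3
```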

# What's the difference between saying that the burden of proof is on one's opponent, and simply saying that they are likely wrong? The idiom of "burden of proof" is used in a way that suggests that it's somehow different from ordinary, straightforward evaluations of evidence and arguments, but I can't think of what that difference could be.

You often do hear people in philosophy say that the 'burden of proof' is on their opponent. And you sometimes hear people argue about who has the 'burden of proof'. I think that what this is usually about is which position is antecedently more plausible, or which position presently has the best arguments in favor of it. It's kind of like the game "King of the Hill". Whoever's on top of the hill is king, and someone else has to knock them off. Personally, I don't find this way of thinking about philosophical arguments very helpful. It's not that I don't think there is a 'truth' to these matters, but philosophical progress tends not to happen in a linear manner. The fact that something seems plausible today may not be a very good guide to whether it is true. More generally, I tend to think that understanding an issue is in a way more important than knowing how to solve it, so telling me that you've given an argument and now someone else has the 'burden of proof' just sounds gratuitous. You gave an...

# Some Christians claim to oppose homosexuality by saying, "hate the sin, not the sinner." Is this a meaningful distinction? Is it a cogent defense against accusations of homophobia?

Yes and No. (I'm a philosopher. What did you expect?) Yes, it's a perfectly reasonable distinction. Suppose your sibling or parent or child (as makes the most sense to you) were to murder someone. I hope that you would find what they had done to be horrible and worthy of moral condemnation. But that doesn't mean that you have to think they are horrible. It doesn't mean that you should stop loving them, or stop supporting them. In fact, I myself think that it would be horrible and worthy of moral condemnation if you did stop loving them, or stop supporting them. So, when (right-wing) Christians say things like, "Hate the sin, love the sinner", that's the sort of thing they mean: You can love this person, even if you think that they are doing bad things. We should all agree with that. But no, it's not, by itself, a cogent defense against accusations of homophobia. The reason it seems like this might be a 'defense' is that the (right-wing) Christians say that they don't condemn people...

# Recently, I read an article about someone whose parents would purposely have sex in front of him when he was a young child. Many of the comments left in response to the article remarked that this amounts to child abuse. (For a less extreme example, it's commonly held that exposing young children to porn or graphic sex scenes is similarly inappropriate.) I agree that this sort of thing is egregious, but I don't know how to explain why. When the child is watching his parents have sex, what exactly is happening that harms him?

Allen has already said a lot about this, so I'll just add a brief note. Early in the response, he says, "Imagine a society in which people live in close quarters and privacy is a luxury." We don't need to imagine such a society! Most human societies prior to the industrial revolution were like that! It used to be quite common for children to see adults having sex. It is, I'd suggest, no accident that it was in Victorian times that people started to worry about children seeing such things: the "primal scene". (Yes, I'm talking to you, Freud.) I won't draw any moral conclusions from that. That would clearly be unjustified. But it does point out that this is a very modern problem. It's really not at all obvious to me why it should be obvious to anyone else that seeing such a thing should be harmful to a child. Indeed, I can recall reading a 'parenting manual' some years ago that advocated having one's children sleep in the 'family bed'. Concerning the sort of issue raised here, the advice was: Love...

# On April 10, 2014, in response to a question, Stephen Maitzen wrote: "I can't see how there could be any law more fundamental than the law of non-contradiction (LNC)." I thought that there were entire logical systems developed in which the law of non-contradiction was assumed not to be valid, and it also seems like "real life" suggests that the law of non-contradiction does not necessarily apply to physical systems. Perhaps I am not understanding the law correctly? Is it that at most one of these statements is true: either "P is true" or "P is not true"? Or is it that at most one of these statements is true: either "P is true" or "~P is true"? In physics, if you take filters that polarize light, and place two at right angles to each other, no light gets through. Yet if you take a third filter at a 45 degree angle to the first two, and insert it between the two existing filters, then some light gets through. Based on this experiment, it seems like the law of non-contradiction cannot be true in...

Just for clarity, and not that Prof Rapaport needs me to tell him this, but it is important to distinguish the question whether contradictions can be true from the question whether one can get oneself into a situation in which one believes a contradiction. I rather suspect that most or even all of us have contradictory beliefs of one sort or another, and that might motivate the view that classical logic is not a good theory of how we ought always to reason. But as Gilbert Harman famously pointed out, it isn't obvious that logic should be in the business of formulating norms of reasoning. Maybe what it does is simply study the notion of truth-preservation. So classical logic might be a good theory of validity, but not a good theory of how to reason, and maybe paraconsistent or relevance logics (or probabilistic analogues thereof) are better theories of the latter. For what it's worth, my own view is that Harman's point, though fundamentally correct, needs very careful handling and that, even in the...
I won't address the issue about physics, but yes: There are plenty of logical systems that allow for the possibility of true contradictions. For the most part, these are motivated by various sorts of paradoxes, such as the liar paradox (which has to do with truth) or the Sorites paradox (vagueness) or Russell's paradox (set theory). But there can be, and have been, deeper motivations, connected with questions about the limitations of human thought, and even Buddhist notions about the nature of ultimate reality. If you're interested in that sort of issue, have a look at Graham Priest's book In Contradiction or his more recent book Towards Non-Being, which is on a slightly different but related topic. I'll add that my own view is that contradictions cannot be true and that, even if they could, that would not help us solve the sorts of paradoxes I mentioned. But that doesn't mean such views aren't worth taking seriously. I could be wrong!!!
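
For readers curious how a logic can tolerate a contradiction without everything following from it, here is a minimal sketch of the three-valued semantics behind Priest's logic LP (my own illustrative encoding, not Priest's notation). The point it illustrates is that the inference from A and not-A to an arbitrary C fails:

```python
from itertools import product

# Truth values in Priest's LP: "T" (true only), "B" (both true and false), "F" (false only).
# A value is "designated" (good enough to count as true) if it is at least partly true.
DESIGNATED = {"T", "B"}

def neg(v):
    """Negation swaps truth and falsity and leaves 'both' fixed."""
    return {"T": "F", "B": "B", "F": "T"}[v]

def entails(premises, conclusion, atoms):
    """Valid iff no assignment makes every premise designated and the conclusion undesignated."""
    for values in product("TBF", repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(p(assignment) in DESIGNATED for p in premises) \
                and conclusion(assignment) not in DESIGNATED:
            return False
    return True

# Explosion (from A and not-A, infer an arbitrary C) fails in LP:
premises = [lambda s: s["A"], lambda s: neg(s["A"])]
print(entails(premises, lambda s: s["C"], ["A", "C"]))   # False: A = "B", C = "F" is a countermodel
```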

# Suppose I have never played a game of chess. If I now make the claim that I've won all the games of chess I've ever played, is that claim true, false, or undefined? A group of friends had an argument over this, and I figured that philosophers are deeply logical thinkers who can give us the answer, and also a proper understanding of why the answer is what it is.

The claim is true. There is no game of chess that you have ever played and lost. That said, if you say that you have won every game of chess you have ever played, then you have said something very misleading. But that is different from saying something false. H.P. Grice started the development of a theory that would explain that difference.
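
One way to see why logicians count the claim as true: read it as a universal claim ("every game I have ever played is one I won"), which is vacuously satisfied when there are no games at all. Python's built-in all() behaves the same way over an empty collection; the sketch below is purely illustrative:

```python
games_played = []  # no games of chess ever played

# "Every game I have ever played is a game I won": a universal claim over an empty domain.
print(all(result == "won" for result in games_played))   # True, vacuously

# "I have won at least one game" is an existential claim, and it is false.
print(any(result == "won" for result in games_played))   # False
```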

# So I'm reading The Power of Logic, 4th edition. In a section describing Modus Tollens it says that "Not A; If A, then B; So, Not B" is an example of Modus Tollens. My question is how can that be, if the conclusion of Modus Tollens is supposed to deny the consequent? Am I reading it wrong or just missing something? Keep in mind I'm still not beyond chapter 1.

There is either a typo in the book you are reading, or else you have reported its contents wrong. Modus tollens is: If A, then B; not-B; so not-A. The version you reported is fallacious: it's a version of the fallacy of denying the antecedent.
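
If it helps, a brute-force truth-table check (a Python sketch, with illustrative helper names) confirms that modus tollens is valid while the pattern as you reported it is not:

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b'."""
    return (not a) or b

def valid(premises, conclusion):
    """An argument form is valid iff no row makes all premises true and the conclusion false."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Modus tollens: If A then B; not-B; therefore not-A.
print(valid([implies, lambda a, b: not b], lambda a, b: not a))   # True

# The reported pattern (denying the antecedent): If A then B; not-A; therefore not-B.
print(valid([implies, lambda a, b: not a], lambda a, b: not b))   # False: A false, B true is a counterexample
```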

# Is it possible for two tautologies to not be logically equivalent?

The term "tautology" has no established technical usage. Indeed, most logicians would avoid it nowadays, at least in technical writing. But when the term is used informally, it usually means: sentence (or formula) that is valid in virtue of its sentential (as opposed to predicate, or modal) structure. I.e., the term tends to be restricted to sentential (or propositional) logic. It is clear that Rapaport is assuming the sort of usage just mentioned: "a tautology is a 'molecular' sentence...that, when evaluated by truth tables , comes out true no matter what truth values are assigned to its 'atomic' constituents". Hence, on this definition, "Every man is a man" would not be a "tautology". Which is fine. It's logically valid, but not because of sentential structure. It is all but trivial to prove, as Rapaport does, that all tautologies are logically equivalent. In fact, however, it is easy to see that Rapaport's proof does not depend upon the restriction to sentential logic. One can prove ...

# This is a follow-up to Miriam Solomon's statement describing philosophy: "Philosophy involves more than deductive logic--it involves the exercise of "good judgment" which in fact we do not understand very well." (June 5, 2014) Can someone tell me more about what this "good judgment" is, please? I studied philosophy in college and I can't recall any of my professors ever suggesting that there was some elusive guiding principle in philosophy beyond what could be articulated... Instead, I was taught that it was about starting with premises and then executing deductive reasoning. Are you now saying that there's something mystical in there that philosophers can't articulate but which guides their work? That seems counter to the way I learned philosophy, where the professors seemed particularly intent on articulating things clearly.

I'll just add that, for similar reasons, "good judgement" is equally important in mathematics, and nothing is more deductive than mathematics.