Are video games art? Many people claim that they are not, but video games seem very similar to storytelling media such as film and literature; the only difference is that, in some games, the player decides the story.

I take it the question ought to mean: Could a video game be the sort of thing to which it would be appropriate to respond as art? If that's the question, then I'd suppose the answer has to be "yes". Art imposes no restriction on its medium. Of course, this is a very different question from whether any actual video game is art. I rather doubt that. That said, however, it's not at all clear what we might mean by "art" here. The term often seems to be used as an honorific. People thus often seem to be concerned about whether rock music might be "art". If it can be, that seems somehow to legitimize it. One point the philosopher Theodore Gracyk nicely makes in some of his books on the aesthetics of rock is that this just isn't a very good question. A better question is whether (some) rock music might be a suitable object of aesthetic response and evaluation, and, I would add, how deep one's aesthetic engagement with it can be. Similar questions could of course be asked about video games.

Is there any number larger than all other numbers? Georg Cantor proved that even infinite quantities may be smaller than other infinities. Still, might there be some infinite number that is greater than all other infinite numbers?

What Prof Pogge has said represents one perspective on this issue, but it involves assumptions that can be rejected. The central issue is whether you are prepared to speak of "how many sets there are". If so, then let Fred be how many sets there are, that is, the number of things that are sets. It is sufficiently clear that Fred is the largest number. In standard set theory, by which I mean Zermelo-Fraenkel set theory (ZF) and its extensions, there is no such thing as the number of things that are sets. There just isn't such a number. But there are other set theories in which there is such a number, and one can, in fact, consistently add to ZF an axiom known as HP which allows us to speak of (cardinal) numbers in a way different from how ZF by itself allows us to speak of them. And then there is a number of all the sets there are, and it is again the biggest number. How can the question how many sets there are simply fail to have an answer? The idea to which Prof Pogge is giving expression is...
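For readers who want the formal core of why ZF itself has no largest number, the reason is Cantor's theorem. Here is a sketch (the notation is standard, not quoted from either answer above):

```latex
% Cantor's theorem: no set X surjects onto its power set P(X).
% Sketch: given any f : X -> P(X), the "diagonal" set
%     D = { x in X : x not in f(x) }
% cannot be f(d) for any d, since D = f(d) would give
%     d in D  <->  d not in f(d) = D,  a contradiction.
\[
  \text{For every set } X:\quad |X| < |\mathcal{P}(X)| = 2^{|X|}.
\]
% Hence in ZF, every cardinal \kappa is exceeded by 2^\kappa,
% so there can be no cardinal number of "all the sets there are".
```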

Is it impossible that there be two recursive sets T and T* of axioms (in the same language) such that their closures under the same recursive set of recursive rules are identical and yet there is no recursive proof of this fact? It seems impossible, but a simple proof of this fact would help elucidate matters!

I see the difference, and perhaps you are right about how the question should be interpreted. Of course, what makes it difficult to answer, in that form, is that the term "recursive", in "recursive proof", seems not to be doing any work. Perhaps what is meant is "finite proof" (as opposed to one containing infinitely many steps). But then, as you say, it is not clear what might be meant by asking whether some fact is or is not provable in some absolute sense, rather than in this theory or in that one. In that regard, it's perhaps worth pointing out explicitly that the argument you gave can be replicated for any extension U of PA. Let PA(U) be PA plus all statements of the form: n is not the Gödel number of a U-proof of a contradiction. Then it will be provable in PA (and therefore in U) that PA and PA(U) have the same theorems iff U is consistent, and now the rest of the argument goes through. We can certainly start with theories even weaker than PA. I do not know how weak we can go.
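The construction just mentioned can be written out schematically. This is my own gloss, with Prf_U a standard arithmetized proof predicate for U:

```latex
% For an extension U of PA, define PA(U) by adding, for each numeral n,
% the statement "n does not code a U-proof of a contradiction":
\[
  \mathrm{PA}(U) \;=\; \mathrm{PA} \;\cup\;
  \{\, \neg\mathrm{Prf}_U(\overline{n}, \ulcorner 0{=}1 \urcorner) \;:\; n \in \mathbb{N} \,\}.
\]
% If U is consistent, each added axiom is a true, decidable arithmetic
% sentence, hence already provable in PA, so PA and PA(U) coincide.
% If U is inconsistent, some added axiom is false and PA-refutable,
% so the theories differ. PA itself proves this biconditional.
```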

Here are a couple of ways of spelling out Peter's earlier remark. The first starts from the fact that S is a theorem of T iff T∪{¬S} is inconsistent. If T is a recursive set of axioms, then of course so is T∪{¬S}. To check if S is a theorem of T, then, you just need to see if T∪{¬S} has the same theorems as some known inconsistent theory. So if you could, in general, decide whether recursive theories prove the same theorems, you could decide whether an arbitrary sentence was a theorem of such a theory, which you can't, since there are undecidable theories. Note that this also shows that, if you could decide whether two theories have the same theorems, you could decide whether an arbitrary theory is consistent. But, in response, one might try restricting the claim to consistent theories. So begin instead with the fact that S is a theorem of T iff the theory T∪{S} has the same theorems as T. If T is recursive, so is T∪{S}. So if you could decide whether these prove the same theorems, you could again...
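The shape of the second reduction can be made concrete in a toy Python sketch. Real theoremhood is undecidable, so the "theories" here are finite sets of sentences with a trivially decidable closure (no inference rules at all); the function names and the setup are mine, and only the structure of the reduction is the point:

```python
# Toy illustration of the reduction: if you could decide whether two
# recursive theories have the same theorems, you could decide theoremhood.

def theorems(theory: frozenset) -> frozenset:
    """Deductive closure. Trivial in this toy: closure under no rules,
    so a sentence is a 'theorem' of T exactly when it is an axiom of T."""
    return theory

def same_theorems(t1: frozenset, t2: frozenset) -> bool:
    """The hypothetical oracle: do the two theories prove the same things?"""
    return theorems(t1) == theorems(t2)

def is_theorem(sentence: str, theory: frozenset) -> bool:
    """The reduction from the text: S is a theorem of T iff
    T together with S has the same theorems as T itself."""
    return same_theorems(theory | {sentence}, theory)

T = frozenset({"p", "q"})
print(is_theorem("p", T))  # True
print(is_theorem("r", T))  # False
```

In a real setting `same_theorems` cannot exist for arbitrary recursive theories, which is exactly what the argument in the text shows.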

Perhaps what's confusing here is one of these each-all things that permeates this whole area. If T and T* have the same theorems, then EACH axiom of T will be provable in T*, and EACH axiom of T* will be provable in T, and of course it follows that there is, as a matter of fact, an algorithmic way of generating precisely those proofs. There is no general reason, however, to suppose that we will be able to prove that EVERY axiom of T is provable in T* nor that EVERY axiom of T* is provable in T. This is the same sort of contrast as between: PA proves EACH statement of the form: n is not a proof of 0=1; but PA does NOT prove: For all n, n is not a proof of 0=1.
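The each/every contrast at the end can be put schematically, with Prf the arithmetized proof predicate for PA (my notation, not the questioner's):

```latex
% PA proves EACH numerical instance:
\[
  \text{for each } n:\quad
  \mathrm{PA} \vdash \neg\mathrm{Prf}(\overline{n}, \ulcorner 0{=}1 \urcorner)
\]
% but, by G\"odel's second incompleteness theorem, NOT the universal claim:
\[
  \mathrm{PA} \nvdash \forall n\, \neg\mathrm{Prf}(n, \ulcorner 0{=}1 \urcorner)
\]
% since that universal statement just is Con(PA), which PA cannot
% prove if it is consistent.
```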

David Hume famously pointed out that there seems to be a logical gap that prevents us from concluding "ought" from "is". It seems to me that the truth of this general observation is still under discussion. Does deontic logic shed any light on this question, as one would expect it to, or does the problem morph into the question of which form deontic logic should take?

The question whether "is" implies "ought", in the most obvious form, is just the question whether: p --> Op, where "Op" means: p ought to be the case. We can consider deontic logics with and without that axiom, if we wish, and I suppose we might learn something from deontic logic about its consequences. But the formal study of deontic logic itself isn't likely to tell us whether we should accept that axiom, any more than the formal study of modal logic will tell us what principles concerning metaphysical necessity we should accept.
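One consequence we do learn from the formal study is worth recording. The following is a standard observation, sketched in a deontic logic with the D axiom (Op → ¬O¬p); the derivation is my own presentation:

```latex
% Adding the schema  p -> Op  (for all p) to a D-logic collapses
% "ought" into "is":
\begin{align*}
  &\neg p \rightarrow O\neg p
      && \text{(instance of the schema, with } \neg p \text{ for } p\text{)}\\
  &O\neg p \rightarrow \neg Op
      && \text{(D axiom)}\\
  &\therefore\; \neg p \rightarrow \neg Op,
      \quad\text{i.e.}\quad Op \rightarrow p\\
  &\text{so, together with } p \rightarrow Op:\quad Op \leftrightarrow p.
\end{align*}
```

So accepting that axiom in any standard deontic system makes what ought to be the case coincide with what is the case, which is one formal reason to reject it.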

When proponents of Intelligent Design insist that it is inconceivable for a particular biological structure to have simply evolved, their opponents sometimes respond "evolution is cleverer than you are." This is a pithy response, and no doubt there is truth to it; but can the ID-proponent really be reasonably expected to accept this?

Whether ID proponents would accept the counter is not necessarily the best question. I would suggest we ask whether they should accept it, or what force it has. My own sense is that the charge that it is "inconceivable" how, say, the eye evolved is really quite lame. Suppose it true that it is utterly beyond the imagination of human beings how the eye might have evolved. So what? Surely there are plenty of things that are utterly beyond our imagining. That we can't figure it out in any detail, or even begin to do so, just doesn't show anything. One might ask why we should believe that the eye evolved, then. The answer, presumably, is that we have good evidence for evolution in general, that we can actually see it in action in simpler cases, and that one can tell some rough story about why and how primitive light-detection might have evolved, and even see a range of such sensory organs in actual organisms. Having any reasonable sense of how the eye, as it is, evolved over the eons isn't really...

Did Bertrand Russell or any of the logicists ever reply to or address Gödel's incompleteness theorem directly?

I do not think Russell ever addressed it directly, and Frege died before Gödel did his work. It is possible that some of the positivistic logicists, like Hempel, did, but not so far as I know. That said, incompleteness has been raised as a problem for contemporary forms of logicism, generally known as "neo-logicism". (For an introduction, see this paper of mine or this paper, by Fraser MacBride.) I think the response to this objection is pretty straightforward, however. Take neo-logicism to be the view that the truths of arithmetic (we'll focus on arithmetic) are all logical consequences of some core set of principles that, though not truths of logic in any sense now available to us, have some claim to be regarded as "conceptual" truths, or "analytic" truths, or something in this general vicinity. The incompleteness theorem tells us that there can be no algorithmically computable ("recursive") set of principles from which all truths of mathematics follow, if, but only if, we assume that...

Suppose a man, Frank, weighs 250 lbs. To some extent, whether or not we count Frank as fat will depend on context. If Frank stands only 5'3" then we might say he's fat; however, if Frank is 7'4" then quite clearly he is not fat. There are, of course, other factors to consider, too (e.g. muscle mass). With that said, it seems to me that we can tweak his height, muscle mass, etc., to the point where it's simply unclear whether Frank should count as fat or not, and neither empirical examination nor rigorous conceptual analysis will clear up the matter. There is ultimately a problem with our very notion of what it is to be fat--and there are many, many other similar cases of vagueness in our language. Does this inherent vagueness imply that there is no fact of the matter about whether Frank is fat? What about the cases where it seems so intuitively clear that Frank is fat (e.g. in possible worlds where he's only 5'3")?

Vagueness has been much discussed in recent years, and pretty much every possible view has been held. Let me just try to clarify a few things, and then I'll suggest some additional reading. First, I'm not absolutely sure, but the last few sentences seem to express a worry of the following form: If there's "a problem with our very notion of what it is to be fat", and if, therefore, "there is no fact of the matter about whether Frank is fat", then there will be such a problem even in "the cases where it seems so intuitively clear that Frank is fat". This kind of view is usually called "nihilism", and it certainly has been held. One form of it, which derives from Gottlob Frege, holds that predicates that exhibit this sort of vagueness are semantically defective, that is, not properly meaningful. But nihilism is a pretty desperate view, and most philosophers would regard it as a last resort. A more common view would break the train of thought here and say that we need to distinguish sorts of cases...