
When asked to choose between two competing theories, A and B, each of which fits the facts, people will sometimes resort to asking questions like, "Which theory is the more probable?" or "Which theory is simpler?" or even "Which theory involves the least upset to all my other beliefs?" Well, what about, "Which is the less weird theory?" Could weirdness (that is, something like distance from everyday experience) count as a good criterion on which to endorse one theory over another? Einstein seems to be appealing to some idea like this in the comment that God doesn't play dice. And would it be fair to say that many philosophers appeal to something like this when they reject panpsychism?

Philosopher John Haugeland once offered a sort of counterpart to Ockham's razor: "Don't get weird beyond necessity" ("Ontological Supervenience," Southern Journal of Philosophy, 1984, pp. 1–12). Of course, the hard part is spelling out what weirdness amounts to and why it counts against a hypothesis. For example: Ockham's razor tells us not to multiply entities beyond necessity; it stands in favor of parsimonious theories. Panpsychism is certainly weird, but from one point of view it's parsimonious: it says that there aren't actually two kinds of physical things (conscious and unconscious) but only one. Does the weirdness swamp the parsimony? If so, why? So as a quick and dirty rule of thumb, "Pick the less weird theory" seems fine. As a serious methodological rule, it may need some work.

What is the panel's response to the philosophical community's ad hominem attacks on Rebecca Tuvel and her article in Hypatia? There was no engagement with her ideas at all, and the editors of Hypatia were forced to remove her article and publish an apology, merely because Ms. Tuvel asked uncomfortable questions.

I just wanted to clear up an important point. The article was not removed. It is still in the journal, including the online edition, and it will stay there. I'd suggest reading this piece http://www.chronicle.com/article/A-Journal-Article-Provoked-a/240021 which gives a clearer picture of the review process of the journal itself. In particular, it makes clear that the associate editorial board doesn't make decisions about what gets published, and isn't involved in the day-to-day operation of the journal. I will leave it to others to discuss more substantive issues.

Many people, like myself, think of Ayn Rand when we think of philosophy, having read her books when young, etc. Coming from this sort of background, it was surprising to me, recently, to be told that the majority of professional philosophers don't regard her as a philosopher at all, or, if they do, take little notice of her. Is that truly the attitude amongst philosophers? If so, is there any particular reason for it? For instance, is it to do with resistance to ideas that come from outside the university?

I don't know if most philosophers would say that she's no philosopher at all, but I suspect many would say she's a marginal philosopher. One reason is that however influential she may have been, many philosophers don't think she's a very good philosopher—not very careful or original or analytically deep—even if they happen to be broadly sympathetic to her views. The fact that she came from outside the academy by itself wouldn't be disqualifying, but in one sense, philosophers are not just people who engage with philosophical issues; they're people who are part of a community whose members read and respond to one another (even when they disagree deeply) and interact in a variety of particular ways. Being outside the academy tends to put you outside the ongoing conversation of that community. Whether that's good, bad, or neutral is another story, but to whatever extent "philosopher" means "someone who's a member of a certain intellectual community," the fact that she was outside the academy is part,...

It seems to me that there are two kinds of numbers: the kind whose concept we can grasp by imagining a case that instantiates it, and the kind we cannot imagine. For example, we can grasp the concept of 1 by imagining one object. The same goes for 2, 3, 0.5 or 0, and pretty much all the most common numbers. But there is this second kind that we cannot imagine. For example, i (the square root of -1) or '532,740,029'. It seems to me that nobody can really imagine what 532,740,029 objects or i object(s) are like (you see, I don't even know whether I should put 'object' or 'objects' here, because I don't know whether i is singular or plural; I don't know what i is). So, Q1) if I cannot imagine a case that instantiates a concept like '532,740,029', do I really know the concept, and if so, how do I know it? Q2) is there a fundamental difference between numbers whose instances I can imagine and those I cannot? (I lean towards there being no difference, but I don't know how to account...

I'd suggest that while there may be differences in how easy it is for us to "picture" or "imagine" different numbers, this isn't a difference in the numbers themselves; it's a rather variable fact about us. I can mentally picture 5 things with no trouble. If I try for ten, it's harder (I have to think of five pairs of things). If I try for 100, it's pretty hopeless, though you might be better at it than I am. But I'm pretty sure that there's no interesting mathematical difference behind that. I'm also pretty sure that I understand the number 100 quite well. I don't need to be able to imagine 100 things to be able to see that 2 × 2 × 5 × 5 is the prime factorization of 100, for example, nor to see that 100 is a perfect square. But that may still be misleading. I have no idea offhand whether 532,740,029 is prime. But I know what it would mean for it to be prime -- or not prime. And in fact, a bit of googling for the right calculators tells me that 532,740,029 = 43 × 1621 × 7643. I can't verify that by doing the...
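The factorization mentioned above is the sort of thing one can check mechanically even without being able to "imagine" the number. Here is a minimal Python sketch (not part of the original answer) that verifies it by trial division:

```python
def is_prime(n: int) -> bool:
    """Trial division; plenty fast for numbers of this size."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

factors = [43, 1621, 7643]

product = 1
for f in factors:
    product *= f

assert product == 532_740_029             # the factorization checks out
assert all(is_prime(f) for f in factors)  # each factor is itself prime
assert not is_prime(532_740_029)          # so the number is composite
print("factorization verified")
```

The point of the check mirrors the answer's: knowing what it would mean for 532,740,029 to be prime, and being able to settle the question by computation, doesn't require any ability to picture that many objects.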

Is it morally acceptable to hate a crime but not the criminal?

I'm having a bit of trouble understanding why it wouldn't be. Among possible reasons why it would be just fine, here are a few. 1) People, and more generally sentient beings, occupy a very different place in the moral universe than mere things (including events and abstract ideas). Moral notions don't even get a grip unless they refer back one way or another to beings as opposed to things. There's simply no reason to think that our attitudes toward people should be in lock-step with our attitudes toward non-sentient things. 2) Moreover, you might think that hating people is almost always not a good thing. It makes it harder to see their humanity; it makes you more likely to treat them less fairly; it fills you up with emotional bile. Hating a crime might not be emotionally healthy either, but given the distinction you're interested in, it's not personal; it's strong moral disapproval of a certain kind of action, and that might be both appropriate and productive. 3) Suppose someone you care...

When I read most discussions about free will, it seems that there is an implicit unspoken assumption that might not be accurate once it is brought forward and addressed explicitly. We know from research (and for me, from some personal experiences) that we make decisions before we are consciously aware that we have made that decision. The discussions about free will all seem to assume that one of the necessary conditions of free will is that we be aware that we are exercising it, in order to have it. (sorry if I did not phrase that very well). In other words, if we are not consciously aware that we are exercising free will in the moment that we are making a decision, then it is assumed that we do not have free will, merely because of that absence of conscious awareness. Suppose we do have free will, and we exercise it without being consciously aware that we are doing so at that particular moment. That might merely be an artifact that either we are using our awareness to do something that requires...

Part of the problem with this debate is that it's not always clear what's really at issue. Take the experiments in which subjects are asked to "freely" choose when to push a button and we discover that the movement began before the subject was aware of any urge to act. The conclusion is supposed to be that the movement was not in response to a conscious act of willing and so wasn't an act of free will. But the proper response seems to be "Who cares?" What's behind our worries about free will has more or less nothing to do with the situation of the subjects in Libet's experiment. Think about someone who's trying to make up their mind about something serious—maybe whether to take a job or go to grad school. Suppose it's clear that the person is appropriately sensitive to reasons, able to reconsider in the light of relevant evidence and so on. There may not even be any clear moment we can point to and say that's when the decision was actually made. I'd guess that if most of us thought about it, we'd...

Lately, I have been hearing many arguments of the form: A is better than B, therefore A should be more like B. This is despite B being considered the less desirable option (often by the one posing the argument). For example: The poor in our country have plenty of food and places to live. In other countries, the poor go hungry and have little to no shelter. It is then implied that the poor in our country should go hungry and have little to no shelter. I was thinking this was a fallacy of suppressed correlative, but that doesn't quite seem to fit. What is the error or fallacy in this form of argument? How might one refute such an argument?

Years ago, I used to teach informal reasoning. One of the things I came to realize was that my students and I were in much the same position when it came to names of fallacies: I'd get myself to memorize them during the term, but not long after, I'd forget most of the names, just as my students presumably did. Still, I think that in this case we can come up with a name that may even be helpful. Start here: the conclusion is a complete non sequitur; it doesn't even remotely follow from the premises. How do we get from "The poor in some countries are worse off than the poor in our country" to "The poor in our country should be immiserated until they are as wretched as the poor in those other countries"? Notice that the premise is a bald statement of fact, while the conclusion tells us what we ought to do about the fact. By and large, an "ought" doesn't simply follow from an "is", and so we have a classic "is/ought" fallacy. However, pointing this out isn't really enough. After all, in some cases...

If the basis of morality is evolutionary and species-specific (for instance, tit-for-tat behaviour proving reproductively successful for humans; cannibalism proving reproductively successful for arachnids), is it thereby delegitimised? After all, different environmental considerations could have favoured the development of different moral principles.

There's an ambiguity in the words "basis of morality." It might be about the natural history of morality, or it might be about its justification. The problem is that there's no good way to draw conclusions about one from the other. In particular, the history of morality doesn't tell us anything about whether our moral beliefs and practices are legitimate. Even more particularly, the question of how morality figured in reproductive success isn't a question about the correctness of moral conclusions. Here's a comparison. When we consider a question and try to decide what the best answer is, we rely on a background practice of reasoning. That practice has a natural history. I'd even dare say that reproductive success is part of the story. But whether our reasoning has a natural history and whether a particular way of reasoning is correct are not the same. Modus ponens (from "A" and "If A then B," conclude "B") is a correct principle of reasoning whatever the story of how we came to it. On the other...
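The correctness claim about modus ponens is the kind of thing that can be checked independently of any history: the rule is valid just in case no row of the truth table makes both premises true and the conclusion false. A small Python sketch (my illustration, not the panelist's) makes the check explicit:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: "If a then b" is false only when a is true and b is false."""
    return (not a) or b

# Exhaustively check every assignment of truth values to A and B.
# Modus ponens is valid iff whenever both premises (A, and A -> B)
# are true, the conclusion B is true as well.
for A, B in product([True, False], repeat=2):
    if A and implies(A, B):
        assert B  # no counterexample row exists

print("modus ponens is truth-preserving")
```

Nothing in the check depends on how we came to accept the rule, which is exactly the answer's point: validity and natural history are separate questions.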

Is there any problem, moral or otherwise, in mixing money and enlightenment? For instance, asking people to pay for spiritual guidance. Should philosophers receive a salary?

Even spiritual teachers have to eat. One might be suspicious of someone who withheld "enlightenment" unless the seeker paid, though in many traditions, support for spiritual guidance comes from voluntary donations. Whatever one thinks about people who explicitly claim to be providing spiritual help, counselors and psychotherapists offer something that's at least in a ballpark not too many miles away. For instance: there are interesting similarities between what one might learn from Buddhist practice and from cognitive behavioral therapy. I, for one, would be puzzled if someone thought a therapist shouldn't charge for her services. Exactly how the lines get drawn here and what, if anything, underlies the difference is an interesting question. If gurus shouldn't be paid, should doctors? How about artists? After all, insofar as I count any of my own experiences as spiritual, some of the more profound ones came from paintings, works of literature, pieces of music. In any case, I'd suggest caution about...

Is there a particular philosophical discipline that deals with large numbers of people doing something innocuous, but having a deleterious effect on a much smaller number of people? If so, does it have a name? Like blame-proration, guilt-apportionment, or anything? Thanks!

Perhaps an example would help, but I think I have the idea. We might want to start by modifying your description a bit. You wrote of large numbers of people doing something innocuous but having a bad effect on a small number of people. If you think about it, however, that means the word "innocuous" isn't really right. And so I'm guessing you have something like this in mind: there's a certain sort of action (call it X-ing) that large numbers of people perform that has the following handful of features. First, it doesn't harm most people at all. Second, though X-ing is potentially harmful to some people, the harm would be minimal or maybe even non-existent if only a few people X-ed, and only occasionally. Third, however, enough people actually do X that it causes palpable harm to the small minority. And given your suggested terms ("blame-proration," "guilt-apportionment") I take your question to be about just how culpable the people who X actually are. If that's right, it's a nice...
