Recent Responses

As all logical arguments must make the assumption that the rules of logic work, is there any way to derive the laws of logic?

As you suggest, all logical arguments (and hence all derivations) depend at least implicitly on laws of logic. So I can't see any way of deriving any law of logic without relying on other laws of logic. Nevertheless, we can derive every law of logic, provided we're allowed to use other laws of logic in our derivation. We needn't fret about our inability to derive a law of logic while relying on no laws of logic, because the demand that we do so is simply incoherent.
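To make that concrete, here is a minimal illustration of my own (not part of the original question or answer), written in the Lean proof assistant as a hypothetical choice of formal system. The "law" being derived is the law of non-contradiction, and the derivation works only because it leans on other logical rules the system already builds in (conjunction elimination and the treatment of negation as implication of falsity):

    -- A toy derivation of one law of logic (non-contradiction),
    -- relying on other laws already built into the system.
    theorem non_contradiction (A : Prop) : ¬ (A ∧ ¬ A) :=
      fun h => h.2 h.1  -- apply the proof of ¬A (h.2) to the proof of A (h.1)

The derivation is trivial, and that is the point: it never gets off the ground without presupposing other laws of logic.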

If you had a child to make yourself happy, as most people do, would that violate the Kantian imperative to avoid treating people as means?

Unfortunately, this is a tricky question for Kantian ethics to address.

On its face, it might appear that procreation (bringing a child into existence) in order to advance one’s own happiness treats the child merely as a means: One ‘uses’ the child to promote one’s own happiness.

But things get more complicated once we attend to exactly what this Kantian imperative says. The Kantian moral requirement you mention states that we are not to treat "humanity" merely as a means. There are debates as to exactly what Kant had in mind by "humanity," but the standard view is that "humanity" means the capacity for rational agency — the ability to choose our 'ends' (our goals or objectives) and the best means to those ends. But a newborn lacks "humanity" in this sense; it cannot choose ends for itself. Nor can a fetus. Still less does a child who does not yet exist have humanity! Hence, the apparent answer to your question is 'no': You cannot treat someone's capacity for rational agency merely as a means if they simply do not have such a capacity.

This is (I suspect) the orthodox Kantian answer. But this answer has counterintuitive implications. For if wronging someone involves treating their humanity merely as a means, then it would seem impossible to wrong anyone without humanity. So it would be impossible to wrong a child at all – and that doesn’t seem correct. Surely it’s possible to wrong children by treating them merely as a means (having a child purely for the sake of, say, harvesting its organs to save others’ lives). So how might Kantians arrive at a more intuitively plausible answer?

The most promising route, in my estimation, is to cast doubt on an assumption on which the orthodox Kantian answer seems to depend. That assumption, very roughly, is that if it would be wrong to treat someone in a particular way because she has property F, then she must have property F at the very time the mistreatment occurs. This assumption is, pretty clearly, false: Jane is fully unconscious and unable to feel pain on Monday. On Monday, a sadistic doctor injects her with a drug that, when Jane awakes on Tuesday, will cause her great pain. In order for Jane to be wronged by this act, she must be capable of feeling pain. But when the injection is made (Monday) she is not able to feel pain. And yet it’s hard to deny that Jane is wronged by the injection. (Perhaps we should say she is wronged on Monday but suffers the wrong on Tuesday?)

Yet if that assumption is rejected, then it seems open to Kantians to argue that despite children not having humanity, they can nevertheless be wronged by choices that will subsequently treat their humanity merely as a means. A child born so as to make her parents happy is wronged, though perhaps not at the very moment she is born.

(There is one final wrinkle here: Does existing matter to whether one is wronged? Jane exists throughout Monday and Tuesday. A not-yet-conceived child does not exist. Is it possible for a wrong to be done to her by conceiving her, a wrong she only suffers later on?)

In any event, this isn’t a simple question for Kantian ethics to handle, but at least this response may help in discerning how Kantians might analyze it.

Is the Koran subject to interpretation or to be taken literally?

I'm curious why you raise this question only with respect to the Koran--and not with respect to other sacred literatures (or perhaps you have them all in mind). I'm no expert on the Koran, but I am pretty sure that, first, your question rests on a false dichotomy: "interpretation" is a matter of determining the meaning of a text (or of a speaker), and sometimes the meaning you settle on is what might be called a literal one, so interpretation CAN itself be literal in nature. Presumably what you have in mind, then, is a different contrast--between metaphorical or symbolic interpretation vs. literal interpretation. But even there I would imagine (said without any claim of expertise) that the Koran is filled with symbolic/metaphorical language, not least because ordinary (non-sacred) speech is itself filled with such; it's rather hard to imagine a text in which every single sentence possesses (or is meant to possess) only literal meaning.

THAT said, perhaps your question is actually a little different, something on the order of, "Are passages in the Koran legitimately subject to divergent interpretations, some more literal, some less literal, etc.?" And here again, as a non-expert, I would imagine that this is a question itself under debate by those whose lives are devoted to interpreting the text--that the many different branches/sects/denominations of Islam (like the different branches of all major world religions) may well be generated by divergent interpretations of the same text. To determine whether (all) those divergent interpretations are equally legitimate is thus to determine just which branches/denominations are the legitimate ones--a question that probably shouldn't be left to the philosophers likely reading this website, but to specialists in Islam, the Koran, etc.

(The last thing I'll offer, as a non-expert, is that to be sure any text is subject to multiple interpretations, and sacred texts seem to be particularly rich in multiple interpretability--but what you're asking for, in effect, are criteria for determining which of the many possible interpretations are "legitimate" or "correct"--a question that probably cannot be answered in the abstract.)

hope that helps!
Andrew

Does one have to be aware that one is exercising one's free will, in order to have free will?

Hard to see why, in my opinion. If (say) a free action is one you undertake such that, at the moment of acting, it was at least logically, and perhaps even physically, possible that you either perform that action or not perform it, then those facts at least seem to be independent of your awareness of them. What would be interesting is an argument showing that those facts could obtain only IF one is aware of them ... but at the moment I don't see how to generate such an argument.

Perhaps in the mix here is a thought running in the other direction, a kind of old-fashioned argument for free will: that if one believes one is acting freely, then one IS acting freely. That is, it treats our conscious experience of (or as of) acting in a way in which multiple options seem logically and perhaps physically open to us as a sufficient condition for acting freely. (Your question concerned whether such awareness was a necessary condition, but here it is offered as a sufficient condition.) That kind of strategy is very old-fashioned, though, and doesn't seem very convincing to most people. At best we may be aware of some or many of the causal factors that determine our behaviors and choices, but we hardly seem aware of the "alternate possibilities" that may or may not be open to us when we act. Add the fact that most people believe there are many causal factors affecting our behavior of which we are not aware, so whatever we ARE aware of could hardly suffice to guarantee the freedom of our action .... So it seems to me that "awareness of alternate possibilities" is neither necessary nor sufficient for freedom ....

This entire answer presupposes a libertarian conception of freedom -- that freedom is, or requires, alternative possibilities. But you may get a very different sort of answer from compatibilists, who (perhaps) may be more inclined to give positive answers to both the necessary-condition and sufficient-condition versions of your question ...

Is it morally acceptable to hate a crime but not the criminal?

I'm having a bit of trouble understanding why it wouldn't be. Among possible reasons why it would be just fine, here are a few.

1) People, and more generally sentient beings, occupy a very different place in the moral universe than mere things (including events and abstract ideas). Moral notions don't even get a grip unless they refer back one way or another to beings as opposed to things. There's simply no reason to think that our attitudes toward people should be in lock-step with our attitudes toward non-sentient things.

2) Moreover, you might think that hating people is almost never a good thing. It makes it harder to see their humanity, makes you more likely to treat them less fairly, and fills you up with emotional bile. Hating a crime might not be emotionally healthy either, but given the distinction you're interested in, it's not personal; it's strong moral disapproval of a certain kind of action, and that might be both appropriate and productive.

3) Suppose someone you care deeply about commits a crime that you disapprove of deeply. In spite of this, your care for the person doesn't just go away. It would seem morally very peculiar to say that because you strongly disapprove of what someone did, you should cultivate hatred of the person. On the contrary, one might think that if you succeed in making yourself hate the person you formerly loved, something good has gone from the world.

4) Someone might say that the real point here runs in the opposite direction. If you don't hate the person, then you shouldn't hate the crime. But that sounds at least as odd. Certain ways of behaving just are despicable—should be condemned in the strongest possible terms. But we don't treat the person who performs an action as one and the same with the action itself. Notice: I can (rightly!) be very angry with someone for behaving in a certain way. But everyone I know who's reached moral maturity knows what it means to be very angry with someone and yet not stop loving them. It's hard to see how that could be wrong.

When I read most discussions about free will, it seems that there is an implicit unspoken assumption that might not be accurate once it is brought forward and addressed explicitly. We know from research (and for me, from some personal experiences) that we make decisions before we are consciously aware that we have made that decision. The discussions about free will all seem to assume that one of the necessary conditions of free will is that we be aware that we are exercising it, in order to have it. (sorry if I did not phrase that very well). In other words, if we are not consciously aware that we are exercising free will in the moment that we are making a decision, then it is assumed that we do not have free will, merely because of that absence of conscious awareness. Suppose we do have free will, and we exercise it without being consciously aware that we are doing so at that particular moment. That might merely be an artifact of the fact that we are using our awareness to do something that requires concentration; only later do we use our awareness to reflect on what we just did.

Part of the problem with this debate is that it's not always clear what's really at issue. Take Benjamin Libet's famous experiments, in which subjects are asked to "freely" choose when to push a button and we discover that the movement began before the subject was aware of any urge to act. The conclusion is supposed to be that the movement was not in response to a conscious act of willing and so wasn't an act of free will. But the proper response seems to be "Who cares?" What's behind our worries about free will has more or less nothing to do with the situation of the subjects in Libet's experiments.

Think about someone who's trying to make up their mind about something serious—maybe whether to take a job or go to grad school. Suppose it's clear that the person is appropriately sensitive to reasons, able to reconsider in the light of relevant evidence and so on. There may not even be any clear moment we can point to and say that's when the decision was actually made. I'd guess that if most of us thought about it, we'd conclude that for many important decisions, we eventually just found ourselves thinking in a certain way at some point. We noticed or realized that our views had settled down. And yet it may be clear that in spite of this, conscious reason and reflection were part of the process out of which the decision eventually percolated and that we can offer reasons that we're willing to endorse for our decision. Put another way, it may be clear that our decision is one that an informed but disinterested third party would see as the fitting outcome of a process of reasonable deliberation.

It seems plausible to me that at least some of the time, what we decide fits this description. This emphatically includes the kinds of decisions where we might be most interested in whether something worth calling "free will" was at work. But cases like this are so different from what Libet's experiments studied that it seems bizarre to think of them as addressing the same concept. In any case, if our important decisions by and large fit this description, then it's very unclear (to me at least) what more in the way of "free will" is left to care about.

Lately, I have been hearing many arguments of the form: A is better than B, therefore A should be more like B. This is despite B being considered the less desirable option (often by the one posing the argument). For example: The poor in our country have plenty of food and places to live. In other countries, the poor go hungry and have little to no shelter. It is then implied that the poor in our country should go hungry and have little to no shelter. I was thinking this was a fallacy of suppressed correlative, but that doesn't quite seem to fit. What is the error or fallacy in this form of argument? How might one refute such an argument?

Years ago, I used to teach informal reasoning. One of the things I came to realize was that my students and I were in much the same position when it came to names of fallacies: I'd get myself to memorize them during the term, but not long after, I'd forget most of the names, just as my students presumably did. Still, I think that in this case we can come up with a name that may even be helpful.

Start here: the conclusion is a complete non sequitur; it doesn't even remotely follow from the premises. How do we get from "The poor in some countries are worse off than the poor in our country" to "The poor in our country should be immiserated until they are as wretched as the poor in those other countries"?

Notice that the premise is a bald statement of fact, while the conclusion tells us what we ought to do about the fact. By and large, an "ought" doesn't simply follow from an "is", and so we have a classic "is/ought" fallacy. However, pointing this out isn't really enough. After all, in some cases the facts don't leave much moral room. To borrow a case from Peter Singer, if there's a child drowning in a shallow pond and I could easily rescue her, then I ought to—even if I'll get my boots wet in the process. As a matter of sheer logic, the "ought" doesn't follow from the facts about the child, but it's not hard to come up with a plausible premise that bridges the logical gap. Singer suggests something like this: if you could easily prevent a great misfortune for someone else at very little cost to yourself, you ought to. Add the obvious fact that in the case of the drowning child you could do that, and we get the conclusion.

Here's where we are so far: the argument you're describing is fallacious as a matter of sheer logic; if we need a name we can say it's an "is/ought" fallacy. But your opponent might say that you aren't really being fair. He might say that you're ignoring some obvious premise that—of course—he was simply taking for granted. The problem is that there's no such premise. After all, here's a premise that's actually plausible:

               If you could make some people worse off without making anyone better off, then you ought not to do so.

Offhand, I can't think of any serious moral position that would disagree. In fact, if a moral theory told us that this premise is wrong by and large (as opposed to wrong in some very special cases such as punishment, perhaps), that would be strong evidence against the theory. But the argument you're describing runs afoul of this premise in just this way. Your opponent is saying that it would be morally better if the poor in this country were poorer than they already are, even if no one else's lot were improved.

We could add some curlicues, but that's probably enough to refute the argument you're asking about. However, I suspect that very few people really endorse that argument. My guess is that what's really at issue is something like this: poverty is relative. What we call "poor" in this country would amount to something close to wealth in some places. Improving the lot of the poor in this country, the argument would continue, is not a high priority, even if we can all agree that actively making them worse off is wrong.

I don't agree, but at least we're now in territory where there are glimmers of interesting issues. For example: some people think individuals should be charitable but that it's wrong for the government to take money from some of us to improve the lot of others. There are also people who think that some state-mandated redistribution is okay, but that there's a threshold beyond which it's wrong. Some of these people say that the poor in this country are mostly above that threshold. I'd guess that the people you're describing actually think something more like this.

We're now in the realm of serious issues. After all, if we set the threshold high enough, then we'll pretty much all agree that government isn't obliged to move people even higher. Many of us don't think the poor in this country are above any such threshold, but it's clear that reasonable people can disagree about how to draw the lines. On the other hand, these issues call for a lot more discussion, and so this is probably a good place to stop.

A lot of philosophy seems to be "philosophy of x" -- philosophy of science, philosophy of language, philosophy of mathematics, etc. Given this, should philosophy, institutionally speaking, be treated as a separate discipline at all? I mean, why couldn't the various philosophies of x be absorbed into the various types of x?

What you are offering is a philosophy of philosophy. From your principle that "philosophy of x" should be absorbed into the department of x, institutionally speaking, doesn't it follow that philosophy of philosophy should be absorbed into the department of philosophy? But it must be treated as "a separate discipline" for this to happen. Where else would you teach the philosophy of philosophy, metaphilosophy? In the department of physics perhaps?

Why did all the ancient philosophers seem so fascinated by astronomy? Their interest in math and "physics" is understandable, as math can be seen as very similar to certain branches of philosophy in that it is not the study of a particular existence, but, rather, the study of "existence," and physics is the study of the seemingly occult laws that govern everything, which is also very similar to philosophy in a sense, but astronomy is just the extrapolation of those two fields onto "arbitrarily chosen" pieces of mass. Math, and even physics to a large extent, are "implicit" (for lack of a better term) in existence, while astronomy is wholly explicit.

You are completely right to notice the early absorption with astronomy. I have heard people say “Greek philosophy began on May 28, 585 BC, at 6:13 in the evening” – because of astronomy. Thales, who is often called the first Greek philosopher, predicted a solar eclipse that we now know to have taken place on that date.

Not only did Thales thereby establish the credentials of philosophers as “ones who know” by being able to predict a coming natural event; he also thereby proved a point about the natural world that encapsulates early philosophy’s turn away from religion. If astonishing events like solar eclipses are not the capricious actions of mysterious gods but rather quite regular events in the natural world, then the world can be adequately studied through rational methods and without dependence on old stories handed down about divine action.

Mind you, this is only one way to understand the earliest philosophers. My point is that interest in astronomy is part of this picture and even one of the central concerns that those philosophers had, not at all something extraneous to their interests. We’d want to say somewhat different things about what astronomy meant to Anaximander and the Pythagoreans, because they had theories unlike anything ascribed to Thales.

Later in the ancient tradition, i.e. with Plato and Aristotle, astronomy came to take on additional significance. But rather than jumping ahead to them, let me back up to the state of astronomy before the philosophers came along.

First of all, it’s highly relevant that astronomy was one of the first success stories in ancient science. The astronomy we encounter in, e.g., the Babylonian records is almost entirely observational: people noted each night what phase the moon was in and which constellations were visible. Thales, who must have gotten his data from Babylon, was able to draw on their long history of watching the night sky. Astronomy let the earliest agricultural civilizations organize their calendars: they could not only observe that the weather had begun to cool off in the fall but know when to expect it to cool, and therefore when the best time to plant and harvest would be. The first calendars began with the observation of recurring patterns in the skies, along with the special problem of coordinating the lunar calendar with a solar year.

It should be obvious why astronomy provides the basis for a calendar. The movements of the sun, moon, and stars not only fall into patterns, once you observe them long enough, but are also quite independent of anything that happens on earth. Earthquakes, floods, and fires – to say nothing of droughts and of human events like war and migration – have no discernible effect on what we see in the stars. Measuring time calls for something that changes in a quantifiable way without being changed by the events one is using the calendar to measure. Nothing else accessible to those ancient civilizations could work as the observable sky could.

This fact also guides us to the second feature of astronomy that would have appealed to philosophers, in addition to the regularity that made the world feel natural and subject to human knowledge. Studying astronomy seemed to bring human beings into contact with relations and events that didn’t mix with lesser natural processes.

We know too much today to think this way. We know that the stars are made of matter like the matter found on earth, and that the observable patterns in our night sky are only the accidental effect of where our little sun is riding around a non-central part of an unimpressive galaxy. But ancient observers who knew none of that perceived what they saw in the sky as close to what we’d call a priori truth.

Ultimately my reply is to reject your assumption about the relative status of mathematics, physics, and astronomy. Astronomy struck a philosopher like Plato as much closer to mathematical truth than anything in the subject that Aristotle called “physics.” It mattered to philosophers (for Plato in particular) precisely because of how close it came to being mathematics.

Plato does distinguish the two subjects, though. In Book 7 of the Republic he has Socrates tell Glaucon that even what we see in the starry sky is visible and hence to some degree subject to the failings of all material objects. Astronomy comes closest of all the sciences to displaying the abstract truths about geometry and motion, but it still isn’t mathematics. Plato would conclude that although philosophers need to study astronomy as they progress toward higher kinds of knowledge, it is not their final object of inquiry.

Humans can apparently commit to beliefs that are ultimately contradictory or incompatible. For instance, one and the same person, unless they're shown a reason to think otherwise, could believe that both quantum mechanics and relativity correspond to reality. What I wanted to ask is -- the ability to hold contradictory beliefs might sometimes be an advantage; for instance, both lines of inquiry could be pursued simultaneously. Is this an advantage that only organic brains have? Is there any good reason a computer couldn't be designed to hold, and act on, contradictory beliefs?

Fantastic question. Just a brief reply (and only one of several possible lines of reply). Suppose you take away the word "belief" from your question. That we can "hold" or "consider" contradictory thoughts or ideas is no big deal -- after all, whenever you decide which of multiple mutually exclusive beliefs to adopt, you continuously weigh all of them as you work your way to your decision. Having that capacity is all you really need to obtain (say) the specific benefit you mention (pursuing multiple lines of inquiry simultaneously). When does a "thought" become a "belief"? Well, that's a super complicated question, particularly when you add in complicating factors such as the ability to believe "subconsciously" or implicitly. On top of that, let's throw in some intellectual humility, which might take the form (say) of (always? regularly? occasionally?) being willing to revisit your beliefs, reconsider them, and consider new opposing arguments and objections. Plus the fact that we may easily change our minds as new evidence arises.

That said, it seems to me that in general there's not much incentive to determine exactly how or when a "thought" becomes a "belief." Maybe that happens when you "commit" (to use your word) to the thought in some strong sense, but then again, when and how does that occur? How often must you "declare" what you believe? So with these mitigating considerations in mind, I'd agree with you that yes, we easily hold and consider contradictory thoughts, and there may well be advantages to doing so (there may well also be disadvantages, worth thinking about). And though I can't say much about artificial intelligence/cognition, if computers can be designed to express thoughts in the first place, it's hard to imagine what would inhibit them from expressing contradictory thoughts ...
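To illustrate that last point, here is a toy sketch in Python, entirely my own invention (the class and its methods are made-up illustrations, not drawn from any real AI system): a tiny "belief base" that happily stores and reports contradictory entries. Whether such stored sentences deserve to be called "beliefs" is, of course, exactly the question discussed above.

    # A toy "belief base" that tolerates contradictory entries rather than
    # rejecting them; the names here are invented purely for illustration.
    class BeliefBase:
        def __init__(self):
            self.beliefs = set()  # pairs of (statement, truth_value)

        def adopt(self, statement, value):
            self.beliefs.add((statement, value))

        def contradictions(self):
            # statements the system currently "holds" both ways
            return {s for (s, v) in self.beliefs if (s, not v) in self.beliefs}

    bb = BeliefBase()
    bb.adopt("quantum mechanics describes reality", True)
    bb.adopt("general relativity describes reality", True)
    bb.adopt("the two theories are jointly consistent", True)
    bb.adopt("the two theories are jointly consistent", False)
    print(bb.contradictions())  # prints: {'the two theories are jointly consistent'}

Nothing in the machinery objects to holding a statement and its negation at once; the interesting work, as in the human case, lies in deciding when and how such tensions should be resolved.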

A couple of good primary sources relevant to your question: Daniel Dennett's book Consciousness Explained (he develops a theory on which the brain entertains many different, often contradictory, thoughts simultaneously), and some work by Tamar Gendler, particularly a paper comparing "beliefs" with "aliefs" (where the latter are thoughts that don't quite rise to the level of beliefs) .... I don't have the title handy, but you should be able to find it.

hope that helps--
ap
