Recent Responses

When I read most discussions about free will, it seems that there is an implicit, unspoken assumption that might not be accurate once it is brought forward and addressed explicitly. We know from research (and for me, from some personal experiences) that we make decisions before we are consciously aware that we have made them. The discussions about free will all seem to assume that one of the necessary conditions of free will is that we be aware that we are exercising it, in order to have it. (sorry if I did not phrase that very well) In other words, if we are not consciously aware that we are exercising free will in the moment that we are making a decision, then it is assumed that we do not have free will, merely because of that absence of conscious awareness. Suppose we do have free will, and we exercise it without being consciously aware that we are doing so at that particular moment. That might merely be an artifact of the fact that we are using our awareness to do something that requires concentration; only later do we use our awareness to reflect on what we just did.

Part of the problem with this debate is that it's not always clear what's really at issue. Take the experiments in which subjects are asked to "freely" choose when to push a button, and we discover that the brain activity initiating the movement began before the subject was aware of any urge to act. The conclusion is supposed to be that the movement was not in response to a conscious act of willing and so wasn't an act of free will. But the proper response seems to be "Who cares?" What's behind our worries about free will has more or less nothing to do with the situation of the subjects in Libet's experiment.

Think about someone who's trying to make up their mind about something serious—maybe whether to take a job or go to grad school. Suppose it's clear that the person is appropriately sensitive to reasons, able to reconsider in the light of relevant evidence and so on. There may not even be any clear moment we can point to and say that's when the decision was actually made. I'd guess that if most of us thought about it, we'd conclude that for many important decisions, we eventually just found ourselves thinking in a certain way at some point. We noticed or realized that our views had settled down. And yet it may be clear that in spite of this, conscious reason and reflection were part of the process out of which the decision eventually percolated and that we can offer reasons that we're willing to endorse for our decision. Put another way, it may be clear that our decision is one that an informed but disinterested third party would see as the fitting outcome of a process of reasonable deliberation.

It seems plausible to me that at least some of the time, what we decide fits this description. This emphatically includes the kinds of decisions where we might be most interested in whether something worth calling "free will" was at work. But cases like this are so different from what Libet's experiments studied that it seems bizarre to think of them as addressing the same concept. In any case, if our important decisions by and large fit this description, then it's very unclear (to me at least) what more in the way of "free will" is left to care about.

Lately, I have been hearing many arguments of the form: A is better than B, therefore A should be more like B. This is despite B being considered the less desirable option (often by the one posing the argument). For example: The poor in our country have plenty of food and places to live. In other countries, the poor go hungry and have little to no shelter. It is then implied that the poor in our country should go hungry and have little to no shelter. I was thinking this was a fallacy of suppressed correlative, but that doesn't quite seem to fit. What is the error or fallacy in this form of argument? How might one refute such an argument?

Years ago, I used to teach informal reasoning. One of the things I came to realize was that my students and I were in much the same position when it came to names of fallacies: I'd get myself to memorize them during the term, but not long after, I'd forget most of the names, just as my students presumably did. Still, I think that in this case we can come up with a name that may even be helpful.

Start here: the conclusion is a complete non sequitur; it doesn't even remotely follow from the premises. How do we get from "The poor in some countries are worse off than the poor in our country" to "The poor in our country should be immiserated until they are as wretched as the poor in those other countries"?

Notice that the premise is a bald statement of fact, while the conclusion tells us what we ought to do about the fact. By and large, an "ought" doesn't simply follow from an "is," and so we have a classic "is/ought" fallacy. However, pointing this out isn't really enough. After all, in some cases the facts don't leave much moral room. To borrow a case from Peter Singer, if there's a child drowning in a shallow pond and I could easily rescue her, then I ought to—even if I'll get my boots wet in the process. As a matter of sheer logic, the "ought" doesn't follow from the facts about the child, but it's not hard to come up with a plausible premise that bridges the logical gap. Singer suggests this: if you could easily prevent a great misfortune for someone else at very little cost to yourself, you ought to. Add the obvious fact that in the case of the drowning child you could do that, and we get the conclusion.

Here's where we are so far: the argument you're describing is fallacious as a matter of sheer logic; if we need a name we can say it's an "is/ought" fallacy. But your opponent might say that you aren't really being fair. He might say that you're ignoring some obvious premise that—of course—he was simply taking for granted. The problem is that there's no such premise. After all, here's a premise that's actually plausible:

               If you could make some people worse off without making anyone better off, then you ought not to do so.

Offhand, I can't think of any serious moral position that would disagree. In fact, if a moral theory told us that this premise is wrong by and large (as opposed to wrong in some very special cases such as punishment, perhaps), that would be strong evidence against the theory. But the argument you're describing runs afoul of this premise in just this way: your opponent is saying that it would be morally better if the poor in this country were poorer than they already are, even though no one else's lot would be improved.

We could add some curlicues, but that's probably enough to refute the argument you're asking about. However, I suspect that very few people really endorse that argument. My guess is that what's really at issue is something like this: poverty is relative. What we call "poor" in this country would amount to something close to wealth in some places. Improving the lot of the poor in this country, the argument would continue, is not a high priority, even if we can all agree that actively making them worse off is wrong.

I don't agree, but at least we're now in territory where there are glimmers of interesting issues. For example: some people think individuals should be charitable but that it's wrong for the government to take money from some of us to improve the lot of others. There are also people who think that some state-mandated redistribution is okay, but that there's a threshold beyond which it's wrong. Some of these people say that the poor in this country are mostly above that threshold. I'd guess that the people you're describing actually think something more like this.

We're now in the realm of serious issues. After all, if we set the threshold high enough, then we'll pretty much all agree that government isn't obliged to move people even higher. Many of us don't think the poor in this country are above any such threshold, but it's clear that reasonable people can disagree about how to draw the lines. On the other hand, these issues call for a lot more discussion, and so this is probably a good place to stop.

A lot of philosophy seems to be "philosophy of x" -- philosophy of science, philosophy of language, philosophy of mathematics, etc. Given this, should philosophy, institutionally speaking, be treated as a separate discipline at all? I mean, why couldn't the various philosophies of x be absorbed into the various types of x?

What you are offering is a philosophy of philosophy. From your principle that "philosophy of x" should be absorbed into the department of x, institutionally speaking, doesn't it follow that philosophy of philosophy should be absorbed into the department of philosophy? But then philosophy must be treated as "a separate discipline" for this to happen. Where else would you teach the philosophy of philosophy, metaphilosophy? In the department of physics, perhaps?

Why did all the ancient philosophers seem so fascinated by astronomy? Their interest in math and "physics" is understandable: math can be seen as very similar to certain branches of philosophy in that it is not the study of a particular existence but, rather, the study of "existence," and physics is the study of the seemingly occult laws that govern everything, which is also very similar to philosophy in a sense. But astronomy is just the extrapolation of those two fields onto "arbitrarily chosen" pieces of mass. Math, and even physics to a large extent, are "implicit" (for lack of a better term) to existence, while astronomy is wholly explicit.

You are completely right to notice the early philosophers' absorption in astronomy. I have heard people say “Greek philosophy began on May 28, 585 BC, at 6:13 in the evening” – because of astronomy. Thales, who is often called the first Greek philosopher, predicted a solar eclipse that we now know to have taken place on that date.

Not only did Thales thereby establish the credentials of philosophers as “ones who know” by being able to predict a coming natural event; he also thereby proved a point about the natural world that encapsulates early philosophy’s turn away from religion. If astonishing events like solar eclipses are not the capricious actions of mysterious gods but rather quite regular events in the natural world, then the world can be adequately studied through rational methods and without dependence on old stories handed down about divine action.

Mind you, this is only one way to understand the earliest philosophers. My point is that interest in astronomy is part of this picture and even one of the central concerns that those philosophers had, not at all something extraneous to their interests. We’d want to say somewhat different things about what astronomy meant to Anaximander and the Pythagoreans, because they had theories unlike anything ascribed to Thales.

Later in the ancient tradition, i.e. with Plato and Aristotle, astronomy came to take on additional significance. But rather than jumping ahead to them, let me back up to the state of astronomy before the philosophers came along.

First of all, it’s highly relevant that astronomy was one of the first success stories in ancient science. The astronomy we encounter in e.g. the Babylonian records is almost entirely observational: people noted each night what phase the moon was in and which constellations were visible. Thales, who must have gotten his data from Babylon, was able to draw on their long history of watching the night sky. Astronomy let the earliest agricultural civilizations organize their calendars, not merely observing that the weather had begun to cool off in the fall but knowing when to expect it to cool, and therefore when the best times to plant and harvest would be. The first calendars began with the observation of recurring patterns in the skies, along with the special problem of coordinating the lunar calendar with a solar year.

It should be obvious why astronomy provides the basis for a calendar. The movements of the sun, moon, and stars not only fall into patterns, once you observe them long enough, but are also quite independent of anything that happens on earth. Earthquakes, floods, droughts, and fires – to say nothing of merely human events like war and migration – have no discernible effect on what we see in the stars. Measuring time calls for something that changes in a quantifiable way without being changed by the events one is using the calendar to measure. Nothing else accessible to those ancient civilizations could work as the observable sky could.

This fact also guides us to the second feature of astronomy that would have appealed to philosophers, in addition to the regularity that made the world feel natural and subject to human knowledge. Studying astronomy seemed to bring human beings into contact with relations and events that didn’t mix with lesser natural processes.

We know too much today to think this way. We know that the stars are made of matter like the matter found on earth, and that the observable patterns in our night sky are only the accidental effect of where our little sun is riding around a non-central part of an unimpressive galaxy. But ancient observers who knew none of that perceived what they saw in the sky as close to what we’d call a priori truth.

Ultimately my reply is to reject your assumption about the relative status of mathematics, physics, and astronomy. Astronomy struck a philosopher like Plato as much closer to mathematical truth than anything in the subject that Aristotle called “physics.” It mattered to philosophers (for Plato in particular) precisely because of how close it came to being mathematics.

Plato does distinguish the two subjects, though. In Book 7 of the Republic he has Socrates tell Glaucon that even what we see in the starry sky is visible and hence to some degree subject to the failings of all material objects. Astronomy comes closest of all the sciences to giving us patterns of the abstract truths about geometry and motion, but it still isn’t mathematics. Plato would conclude that although philosophers need to study astronomy as they progress toward higher kinds of knowledge, it is not their final object of inquiry.

Humans can apparently commit to beliefs that are ultimately contradictory or incompatible. For instance, one and the same person, unless they're shown a reason to think otherwise, could believe that both quantum mechanics and relativity correspond to reality. What I wanted to ask is -- the ability to hold contradictory beliefs might sometimes be an advantage; for instance, both lines of inquiry could be pursued simultaneously. Is this an advantage that only organic brains have? Is there any good reason a computer couldn't be designed to hold, and act on, contradictory beliefs?

Fantastic question. Just a brief reply (and only one mode of several possible replies). Suppose you take away the word "belief" from your question. That we can "hold" or "consider" contradictory thoughts or ideas is no big deal -- after all, whenever you decide which of multiple mutually exclusive beliefs to adopt, you continuously weigh all of them as you work your way to your decision. Having that capacity is all you really need to obtain (say) the specific benefit you mention (pursuing multiple lines of inquiry simultaneously). When does a "thought" become a "belief"? Well, that's a super complicated question, particularly when you add in complicating factors such as the ability to believe "subconsciously" or implicitly. On top of that, let's throw in some intellectual humility, which might take the form (say) of (always? regularly? occasionally?) being willing to revisit your beliefs, reconsider them, and consider new opposing arguments and objections. Plus the fact that we may easily change our minds as new evidence arises. That said, it seems to me that in general there's not much incentive to determine exactly how or when a "thought" becomes a "belief." Maybe that happens when you "commit" (to use your word) to the thought in some strong sense, but then again, when and how does that occur? How often must you "declare" what you believe? So with these considerations in mind, I'd agree with you that yes, we easily hold and consider contradictory thoughts; there may well be advantages to doing so (there may also be disadvantages worth thinking about); and though I can't say much about artificial intelligence/cognition, if computers can be designed to express thoughts in the first place, it's hard to imagine what would inhibit them from expressing contradicting thoughts ...
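To make that last point concrete, here is a toy sketch of my own (the class name BeliefBase and its methods are invented for the illustration; this is not drawn from any actual AI system) showing that there is nothing mechanically difficult about a program storing, and acting on, mutually incompatible propositions:

```python
# A toy "belief base" that tolerates contradictions. Purely illustrative:
# it stores propositions as strings and never checks them for consistency,
# so incompatible claims can sit side by side and each can drive action.

class BeliefBase:
    def __init__(self):
        self.beliefs = set()

    def adopt(self, proposition):
        # No consistency check on purpose.
        self.beliefs.add(proposition)

    def holds(self, proposition):
        return proposition in self.beliefs

agent = BeliefBase()
agent.adopt("quantum mechanics describes reality")
agent.adopt("general relativity describes reality")

# The agent can act on both, e.g. by pursuing each line of inquiry.
for p in sorted(agent.beliefs):
    print("Pursuing line of inquiry based on:", p)
```

Whether what such a program stores deserves to be called "beliefs" is, of course, exactly the hard question raised above.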

A couple of good primary sources relevant to your question: Daniel Dennett's book Consciousness Explained (he develops a theory wherein the brain expresses many different, often contradictory, thoughts simultaneously), and some work by Tamar Gendler, particularly a paper comparing "beliefs" with "aliefs" (where the latter are thoughts that don't quite rise to the level of beliefs) .... I don't have the title handy, but you should be able to find it.

hope that helps--
ap

What philosophical works have been dedicated to the topic of rational decision making, the adoption of values, or how people choose their purposes in life?

A slim, accessible book on part of this question (and only part!) is Decision Theory and Rationality by José Luis Bermúdez (Oxford University Press 2009). It requires little or no technical knowledge of decision theory, and shows how decision theory can't possibly be an exhaustive account or explication of rationality. Bermúdez makes a good case, in simplest terms, that rationality plays at least three key roles: the guidance of action (i.e. answering the question what counts as a rational solution to a decision problem), normative judgement (answering the question whether a decision problem was set up in a way that reflected the situation it is addressing), and explanation (answering the question how rational actors behave and why). He argues that no form of decision theory (there are lots, and he only explores a few of the more common ones) can perform all three of these roles, yet that if rationality has one of these three roles or dimensions, it has to have all three of them. So decision theory can't be the whole story.

But it sounds as if your question goes way beyond decision theory and rationality in the narrow sense. The question of how people arrive at their values and purposes, or how they should arrive at them in the first place (decision theory takes for granted that people already have settled and stable values), is not one about which philosophers since the ancient world have had much to say that's of any use. For the first couple of thousand years or so since their time, religion has been the main source of answers to that question. Since the Enlightenment (i.e. for the past quarter-millennium or so) it's been explored mostly not by philosophers strictly speaking, but by literary types with philosophical inclinations such as Diderot, Goethe, Dostoevsky, Robert Musil, Marcel Proust, Thomas Mann and so on (not much in English since about George Eliot). Not that religions have given up (including now various brands of secular religion as well), and there's also a huge self-help literature on the subject, much of it religious or quasi-religious, and most of it of little or no value. If a philosopher (in the modern sense, i.e. an academic philosopher, not in the 18th-century French sense) tries to tell you she has an answer to this question, that is the point where you should stop listening and walk away.

If the basis of morality is evolutionary and species-specific (for instance, tit-for-tat behaviour proving reproductively successful for humans; cannibalism proving reproductively successful for arachnids), is it thereby delegitimised? After all, different environmental conditions could have favoured the development of different moral principles.

There's an ambiguity in the words "basis of morality." It might be about the natural history of morality, or it might be about its justification. The problem is that there's no good way to draw conclusions about one from the other. In particular, the history of morality doesn't tell us anything about whether our moral beliefs and practices are legitimate. Even more particularly, the question of how morality figured in reproductive success isn't a question about the correctness of moral conclusions.

Here's a comparison. When we consider a question and try to decide what the best answer is, we rely on a background practice of reasoning. That practice has a natural history. I'd even dare say that reproductive success is part of the story. But whether our reasoning has a natural history and whether a particular way of reasoning is correct are not the same question. Modus ponens (from "A" and "If A then B," conclude "B") is a correct principle of reasoning whatever the story of how we came to it. On the other hand, affirming the consequent (concluding "A" from "B" and "If A then B") is invalid reasoning even if it turns out that often, in typical human circumstances, there's some sort of advantage to reasoning this way. (Reasoning heuristics can be invalid and yet still be useful rules of thumb, though don't bet on this one being a good example.)
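If it helps to see the contrast mechanically, here is a small sketch of my own (not part of the answer above) that checks the two argument forms by brute-forcing their truth tables; an argument form is valid just in case no assignment of truth values makes all the premises true while the conclusion is false:

```python
from itertools import product

def valid(premises, conclusion):
    # Valid iff no assignment of truth values makes every premise true
    # while the conclusion is false.
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False  # counterexample found
    return True

# Modus ponens: from A and (if A then B), conclude B.
print(valid([lambda a, b: a, lambda a, b: (not a) or b],
            lambda a, b: b))   # True: no counterexample exists

# Affirming the consequent: from B and (if A then B), conclude A.
print(valid([lambda a, b: b, lambda a, b: (not a) or b],
            lambda a, b: a))   # False: A false, B true is a counterexample
```

The check knows nothing about how we came to reason this way; it only asks whether the form can lead from truth to falsehood, which is the point about validity being independent of natural history.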

I assume the larger point is obvious. We say that stealing is wrong, and there's presumably an origins story about how we came to that principle. But that doesn't give us a reason to doubt that stealing really is wrong.

Not quite the same point, but still relevant. There's no such thing as spider morality. Spiders don't subscribe to a code of cannibalism; they just (sometimes) eat their mothers. (BTW: rabbits sometimes eat their young. Happy Easter!) The reason we don't talk about spider morality is that spiders can't step back and ponder whether it's really okay to eat Momma, but we can. Even if eating your mother might be in your reproductive interest, a little thought should suggest that it's probably still not okay.*

The causal basis of a belief is one thing; the soundness of the belief is another. For some reason, this point often seems less obvious for morality than for arithmetic, but it holds all the same. Knowing where a belief came from doesn't tell us whether it's true.

-------------
* Not under any circumstances? No. A bit of imagination will let you come up with Bizarro cases where eating dear Momma would be the best thing to do; details left as an exercise for the reader. But this actually reinforces the point. We can reason about what we ought to do. If we couldn't, there'd be no such thing as morality. And the conclusions delivered by our reasoning won't always be what you'd expect if you simply looked to evolutionary or social history.

Is there any problem, moral or otherwise, in mixing money and enlightenment? For instance, asking people to pay for spiritual guidance. Should philosophers receive a salary?

Even spiritual teachers have to eat. One might be suspicious of someone who withheld "enlightenment" unless the seeker paid, though in many traditions, support for spiritual guidance comes from voluntary donations.

Whatever one thinks about people who explicitly claim to be providing spiritual help, counselors and psychotherapists offer something that's at least in a ballpark not too many miles away. For instance: there are interesting similarities between what one might learn from Buddhist practice and from cognitive behavioral therapy. I, for one, would be puzzled if someone thought a therapist shouldn't charge for her services. Exactly how the lines get drawn here and what, if anything, underlies the difference is an interesting question. If gurus shouldn't be paid, should doctors? How about artists? After all, insofar as I count any of my own experiences as spiritual, some of the more profound ones came from paintings, works of literature, pieces of music.

In any case, I'd suggest caution about lumping philosophers together with spiritual teachers. Although there are some exceptions, most of what philosophers actually do isn't much like spiritual guidance at all. Here's a passage from a classic 20th century philosophical essay:

"Confusion of meaning with extension, in the case of general terms, is less common than confusion of meaning with naming in the case of singular terms." (W. V. O. Quine, 'Two Dogmas of Empiricism.')

I think the author would have been surprised if anyone had thought this was part of a piece of spiritual guidance. Perhaps he might have been less surprised that some people would be puzzled that he received a salary, but the world is full of wonders.

When a person, and especially a talented one, dies young, people sometimes mourn not just what they have in fact lost, but what might have been. But is mourning what might have been predicated on the belief that things could have been otherwise? And if someone is a thoroughgoing determinist and thinks that there's only one way things ever could have turned out, would it be irrational for such a person to mourn what might have been?

One way to interpret the mourner's state of mind is this: the mourner is thinking (optimistically) about the life the young person would have led had he/she not died young. That state of mind is consistent with believing that the young person's death was fully determined by the initial conditions of the universe in combination with the laws of nature.

The deterministic mourner might even recognize that, in mourning the young person's death, the mourner is committed to regretting that the Big Bang occurred just the way it did or that the laws of nature are just as they are: for only if the Big Bang or the laws of nature (or both) had been appropriately different would the young person not have died young. Furthermore, determinism allows that they could have been different. Determinism doesn't say that the initial conditions and the laws of nature are themselves causally determined; that would require causation to occur before any causation could occur.

Although the deterministic mourner's regret may sound odd, it doesn't strike me as irrational. The young person's early death is a painful but deterministic result of the laws of nature and the initial conditions of the universe -- and therefore one reason to regret that the laws and conditions were not appropriately different.

Is there a particular philosophical discipline that deals with large numbers of people doing something innocuous, but having a deleterious effect on a much smaller number of people? If so, does it have a name? Like blame-proration, guilt-apportionment, or anything? Thanks!

Perhaps an example would help, but I think I have the idea. We might want to start by modifying your description a bit. You wrote of large numbers of people doing something innocuous but having a bad effect on a small number of people. If you think about it, however, that means the word "innocuous" isn't really right. And so I'm guessing you have something like this in mind: there's a certain sort of action (call it X-ing) that large numbers of people perform and that has something like the following handful of features. First, it doesn't harm most people at all. Second, though X-ing is potentially harmful to some people, the harm would be minimal or maybe even non-existent if only a few people X-ed, and only occasionally. Third, however, enough people actually do X that it causes palpable harm to the small minority. And given your suggested terms ("blame-proration," "guilt-apportionment"), I take your question to be about just how culpable the people who X actually are.

If that's right, it's a nice question. The broad discipline is ethics, though giving a name to the right subdivision is a bit tricky (especially for someone like me who's interested in the field but doesn't work in it). We're not up in the stratosphere of meta-ethics where people work if they're interested in whether there are objective moral truths, for instance. We're also not at the level of normative ethics where people propose and/or defend frameworks such as utilitarianism or virtue ethics. But we're also not as close to the ground as work in applied ethics. Your issue is theoretical, with both conceptual and normative components, and with potential implications for practical or applied ethics. There's actually a lot of work of this sort in the field of ethics. If I were going to tackle this sort of question, I'd start by assembling some examples to clarify the issues. I wouldn't restrict the examples to cases that are quite as focussed as the question you raise. For instance: I'd also look at cases in which the number of people potentially harmed may not be small, but where the individual actions, taken one at a time, don't seem objectionable. If I drive to work rather than taking the bus, I've increased my carbon footprint—not by a lot in absolute terms, and it's not as though there are no reasons for taking my car rather than the bus. (I'll get to my office earlier and may get more work done, for instance.) But if enough people drive rather than take the bus, the cumulative effect is significant. Your problem is even more specific, but you can see that there's an important connection. And the sort of problem I've identified is one that's been widely discussed. (It's a close cousin of what's sometimes called "the problem of the commons.")
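To see how the arithmetic of such cases works, here is a back-of-the-envelope sketch with frankly invented numbers (the per-driver figure and the number of drivers are hypothetical placeholders, not data):

```python
# Toy illustration: an individually negligible contribution, aggregated
# across many people, becomes anything but negligible. All figures are
# hypothetical placeholders chosen only to show the shape of the problem.

extra_kg_co2_per_driver_per_day = 4.0   # hypothetical marginal figure
commuters_who_switch = 500_000          # hypothetical number of drivers
working_days_per_year = 250

individual_annual = extra_kg_co2_per_driver_per_day * working_days_per_year
aggregate_annual = individual_annual * commuters_who_switch

print(f"One driver adds about {individual_annual / 1000:.1f} tonnes of CO2 a year.")
print(f"Half a million drivers add about {aggregate_annual / 1_000_000:.0f} kilotonnes a year.")
```

The particular numbers don't matter; the point is that a contribution that rounds to nothing at the individual level need not round to nothing in aggregate, which is where the question of how to apportion blame gets its bite.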

So anyway: the broad discipline is ethics, and the question has both theoretical and practical elements. It's also an interesting issue to think about. Perhaps other panelists who actually work in ethics will have more to say.
