Recent Responses

What philosophical works have been dedicated to the topic of rational decision making, the adoption of values, or how people choose their purposes in life?

A slim, accessible book on part of this question (and only part!) is Decision Theory and Rationality by José Luis Bermúdez (Oxford University Press 2009). It requires little or no technical knowledge of decision theory, and shows how decision theory can't possibly be an exhaustive account or explication of rationality. Bermúdez makes a good case, in simplest terms, that rationality plays at least three key roles: the guidance of action (i.e. answering the question what counts as a rational solution to a decision problem), normative judgement (answering the question whether a decision problem was set up in a way that reflected the situation it is addressing), and explanation (answering the question how rational actors behave and why). He argues that no form of decision theory (there are lots, and he only explores a few of the more common ones) can perform all three of these roles, yet that if rationality has one of these three roles or dimensions, it has to have all three of them. So decision theory can't be the whole story.

But it sounds as if your question goes way beyond decision theory and rationality in the narrow sense. The question of how people arrive at their values and purposes, or how they should arrive at them in the first place (decision theory takes for granted that people already have settled and stable values), is not one that philosophers since the ancient world have had much useful to say about. For the first couple of thousand years or so since their time, religion has been the main source of answers to that question. Since the Enlightenment (i.e. for the past quarter millennium or so) it's been explored mostly not by philosophers strictly speaking, but by literary types with philosophical inclinations such as Diderot, Goethe, Dostoevsky, Robert Musil, Marcel Proust, Thomas Mann and so on (not much in English since about George Eliot). Not that religions have given up (including now various brands of secular religion as well), and there's also a huge self-help literature on the subject, much of it religious or quasi-religious, and most of it of little or no value. If a philosopher (in the modern sense, i.e. an academic philosopher, not in the 18th-century French sense) tries to tell you she has an answer to this question, that is the point where you should stop listening and walk away.

If the basis of morality is evolutionary and species-specific (for instance, tit for tat behaviour proving reproductively successful for humans; cannibalism proving reproductively successful for arachnids), is it thereby delegitimised? After all, different environmental considerations could have favoured the development of different moral principles.

There's an ambiguity in the words "basis of morality." It might be about the natural history of morality, or it might be about its justification. The problem is that there's no good way to draw conclusions about one from the other. In particular, the history of morality doesn't tell us anything about whether our moral beliefs and practices are legitimate. Even more particularly, the question of how morality figured in reproductive success isn't a question about the correctness of moral conclusions.

Here's a comparison. When we consider a question and try to decide what the best answer is, we rely on a background practice of reasoning. That practice has a natural history. I'd even dare say that reproductive success is part of the story. But whether our reasoning has a natural history and whether a particular way of reasoning is correct are not the same. Modus ponens (from "A" and "If A then B," conclude "B") is a correct principle of reasoning whatever the story of how we came to it. On the other hand, affirming the consequent (concluding "A" from "B" and "If A then B") is invalid reasoning even if it turns out that often, in typical human circumstances, there's some sort of advantage to reasoning this way. (Reasoning heuristics can be invalid and yet still be useful rules of thumb, though don't bet on this one being a good example.)
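If it helps to see the contrast between the two argument forms concretely, here is a minimal sketch (my own illustration in Python, not anything from the original exchange) that checks validity by brute force: an argument form is valid just in case no assignment of truth values makes all the premises true while the conclusion is false.

```python
from itertools import product

def implies(a, b):
    # Material conditional: "If A then B" is false only when A is true and B is false.
    return (not a) or b

def valid(premises, conclusion):
    # Valid iff no truth assignment makes every premise true while the conclusion is false.
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Modus ponens: from A and "If A then B," conclude B.
print(valid([lambda a, b: a, implies], lambda a, b: b))  # True: no counterexample exists

# Affirming the consequent: from B and "If A then B," conclude A.
print(valid([lambda a, b: b, implies], lambda a, b: a))  # False: A false, B true is a counterexample
```

The check says nothing about where either habit of inference came from, which is exactly the point: validity is settled by the pattern itself, not by its natural history.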

I assume the larger point is obvious. We say that stealing is wrong, and there's presumably an origins story about how we came to that principle. But that doesn't give us a reason to doubt that stealing really is wrong.

Not quite the same point, but still relevant. There's no such thing as spider morality. Spiders don't subscribe to a code of cannibalism; they just (sometimes) eat their mothers. (BTW: rabbits sometimes eat their young. Happy Easter!) The reason we don't talk about spider morality is that spiders can't step back and ponder whether it's really okay to eat Momma, but we can. Even if eating your mother might be in your reproductive interest, a little thought should suggest that it's probably still not okay.*

The causal basis of a belief is one thing; the soundness of the belief is another. For some reason, this point often seems less obvious for morality than for arithmetic, but it holds all the same. Knowing where a belief came from doesn't tell us whether it's true.

-------------
* Not under any circumstances? No. A bit of imagination will let you come up with Bizarro cases where eating dear Momma would be the best thing to do; details left as an exercise for the reader. But this actually reinforces the point. We can reason about what we ought to do. If we couldn't, there'd be no such thing as morality. And the conclusions delivered by our reasoning won't always be what you'd expect if you simply looked to evolutionary or social history.

Is there any problem, moral or otherwise, in mixing money and enlightenment? For instance, asking people to pay for spiritual guidance. Should philosophers receive a salary?

Even spiritual teachers have to eat. One might be suspicious of someone who withheld "enlightenment" unless the seeker paid, though in many traditions, support for spiritual guidance comes from voluntary donations.

Whatever one thinks about people who explicitly claim to be providing spiritual help, counselors and psychotherapists offer something that's at least in a ballpark not too many miles away. For instance: there are interesting similarities between what one might learn from Buddhist practice and from cognitive behavioral therapy. I, for one, would be puzzled if someone thought a therapist shouldn't charge for her services. Exactly how the lines get drawn here and what, if anything, underlies the difference is an interesting question. If gurus shouldn't be paid, should doctors? How about artists? After all, insofar as I count any of my own experiences as spiritual, some of the more profound ones came from paintings, works of literature, pieces of music.

In any case, I'd suggest caution about lumping philosophers together with spiritual teachers. Although there are some exceptions, most of what philosophers actually do isn't much like spiritual guidance at all. Here's a passage from a classic 20th century philosophical essay:

"Confusion of meaning with extension, in the case of general terms, is less common than confusion of meaning with naming in the case of singular terms." (W. V. O. Quine, 'Two Dogmas of Empiricism.')

I think the author would have been surprised if anyone had thought this was part of a piece of spiritual guidance. Perhaps he might have been less surprised that some people would be puzzled that he received a salary, but the world is full of wonders.

When a person, and especially a talented one, dies young, people sometimes mourn not just what they have in fact lost, but what might have been. But is mourning what might have been predicated on the belief that things could have been otherwise? And if someone is a thoroughgoing determinist and thinks that there's only one way things ever could have turned out, would it be irrational for such a person to mourn what might have been?

One way to interpret the mourner's state of mind is this: the mourner is thinking (optimistically) about the life the young person would have led had he/she not died young. That state of mind is consistent with believing that the young person's death was fully determined by the initial conditions of the universe in combination with the laws of nature.

The deterministic mourner might even recognize that, in mourning the young person's death, the mourner is committed to regretting that the Big Bang occurred just the way it did or that the laws of nature are just as they are: for only if the Big Bang or the laws of nature (or both) had been appropriately different would the young person not have died young. Furthermore, determinism allows that they could have been different. Determinism doesn't say that the initial conditions and the laws of nature are themselves causally determined; that would require causation to occur before any causation could occur.

Although the deterministic mourner's regret may sound odd, it doesn't strike me as irrational. The young person's early death is a painful but deterministic result of the laws of nature and the initial conditions of the universe -- and therefore one reason to regret that the laws and conditions were not appropriately different.

Is there a particular philosophical discipline that deals with large numbers of people doing something innocuous, but having a deleterious effect on a much smaller number of people? If so, does it have a name? Like blame-proration, guilt-apportionment, or anything? Thanks!

Perhaps an example would help, but I think I have the idea. We might want to start by modifying your description a bit. You wrote of large numbers of people doing something innocuous but having a bad effect on a small number of people. If you think about it, however, that means the word "innocuous" isn't really right. And so I'm guessing you have something like this in mind: there's a certain sort of action (call it X-ing) that large numbers of people perform, and it has something like the following handful of features. First, it doesn't harm most people at all. Second, though X-ing is potentially harmful to some people, the harm would be minimal or maybe even non-existent if only a few people X-ed, and only occasionally. Third, however, enough people actually do X that it causes palpable harm to the small minority. And given your suggested terms ("blame-proration," "guilt-apportionment"), I take your question to be about just how culpable the people who X actually are.

If that's right, it's a nice question. The broad discipline is ethics, though giving a name to the right subdivision is a bit tricky (especially for someone like me who's interested in the field but doesn't work in it). We're not up in the stratosphere of meta-ethics where people work if they're interested in whether there are objective moral truths, for instance. We're also not at the level of normative ethics where people propose and/or defend frameworks such as utilitarianism or virtue ethics. But we're also not as close to the ground as work in applied ethics. Your issue is theoretical, with both conceptual and normative components, and with potential implications for practical or applied ethics. There's actually a lot of work of this sort in the field of ethics.

If I were going to tackle this sort of question, I'd start by assembling some examples to clarify the issues. I wouldn't restrict the examples to cases that are quite as focussed as the question you raise. For instance: I'd also look at cases in which the number of people potentially harmed may not be small, but where the individual actions, taken one at a time, don't seem objectionable. If I drive to work rather than taking the bus, I've increased my carbon footprint—not by a lot in absolute terms, and it's not as though there are no reasons for taking my car rather than the bus. (I'll get to my office earlier and may get more work done, for instance.) But if enough people drive rather than take the bus, the cumulative effect is significant. Your problem is even more specific, but you can see that there's an important connection. And the sort of problem I've identified is one that's been widely discussed. (It's a close cousin of what's sometimes called "the problem of the commons.")
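To make the structure of that kind of case vivid, here is a toy numerical sketch (my own illustration, with made-up numbers and an assumed threshold, not anything from the answer itself): suppose each act of X-ing adds a tiny fixed increment of harm to the affected minority, so the harm is negligible when only a few people X but palpable once enough do.

```python
# Toy model (illustrative assumptions only): each act of X-ing adds a tiny,
# fixed increment of harm to the affected minority, and harms simply add up.

def total_harm(num_actors: int, harm_per_act: float = 0.001) -> float:
    return num_actors * harm_per_act

PALPABLE = 1.0  # arbitrary threshold for "palpable" harm

for n in (5, 500, 5_000):
    harm = total_harm(n)
    status = "palpable" if harm >= PALPABLE else "negligible"
    print(f"{n:>5} people X-ing -> total harm {harm:6.3f} ({status})")
```

On these (obviously oversimplified) assumptions, no individual contribution looks significant on its own, which is precisely what makes questions about how to apportion blame for the aggregate harm interesting.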

So anyway: the broad discipline is ethics, and the question has both theoretical and practical elements. It's also an interesting issue to think about. Perhaps other panelists who actually work in ethics will have more to say.

1. Stella is a woman and she is mortal. 2. Joan is a woman and she is mortal. 3. Liz is a woman and she is mortal... etc. How many instances of women being mortal do I need before I can come to the general conclusion that all women are mortal?

the short answer: you need as many instances as there are (or have been, or will be) women.

a longer answer: if what you're asking is how many instances do you need before it might be reasonable to infer that all women are mortal -- well there's no absolute answer to such a question (I would say). Partly it's about all such similar forms of reasoning -- in general, how many instances do you need in any inductive argument before it's reasonable to draw the general conclusion. Partly it's about the specific case -- what are the specific biological facts about womanhood (assuming that's a biological category) and mortality, which might govern how many instances are required before the general conclusion is reasonable. Partly it's a matter of social norms -- in the community you inhabit, how many instances will people require of you before they decide you are reasonable etc .....

the short answer has the benefit not merely of being correct but also being clear!

hope that helps ---
Andrew

Does a stereotype need to be largely false to be objectionable? Many people seem to think so, as when they respond to criticism of stereotypes by replying, "Some stereotypes exist for a reason."

"Largely false" is an interesting phrase -- and there are several different things one might mean by a stereotype, and it's being "true" or "somewhat/largely" true ... plus there are different sorts of "offenses" one may commit when using stereotypes -- but to be brief: Let's assume some stereotype is largely true, i.e. true of many/most of the members of the relevant category. One might still proceed objectionably when using that stereotype simply for assuming that what's true of many/most is in fact true of all. Indeed, we sometimes say that fail to treat an individual with appropriate respect when you simply classify that individual as a member of some category and are disinterested in the particular details that might characterize that individual. So even if the stereotype is true of that individual, it may still be wrong to ASSUME it is true of that individual; and all the more so if it turns out the stereotype is not true of that individual. So a short answer to your excellent question is no: even "largely true" stereotypes might be objectionable.

Now there are all sorts of ways to start qualifying this -- but I'll leave it at that.

hope that helps...
Andrew

What is the difference between a marital relationship and a committed relationship in all aspects except the legal bond? Is there really a difference?

The difference is exactly that marriage is a legal bond, and it involves certain obligations and requirements (for example those having to do with property) that may not be implied by the "committed relationship". It is as a result a more serious affair. There is also the historically related fact that marriage is often taken to have a religious dimension, which the committed relationship may or may not have. What some people dislike about marriage is that in the past it has existed in a hierarchical setting, so that a priest or other official, at a particular moment, says the words, 'I pronounce you man and wife.' It may be that in a particular committed relationship there is such a moment, but it may also not be the case.

Is there a way to confirm a premise's truth? When I looked it up I found two ways suggested. The first was the idea that a premise can be common sense, which I can't separate from the idea that appeals to consensus are considered a fallacy. The second was that it can be supported by inductive evidence, which to my knowledge can only be used to support claims of likelihood, not certainty.

The answer will vary with the sort of premise. For example: we confirm the truth of a mathematical claim in a very different way than we confirm the truth of a claim about the weather. Some things can be confirmed by straightforward observation (there's a computer in front of me). Some can be confirmed by calculation (for example, that 479x368=176,272). Depending on our purposes and the degree of certainty we need, some can be confirmed simply by looking things up. (That's how I know that Ludwig Wittgenstein was born in 1889.) Some call for more extensive investigation, possibly including the methods and techniques of some scientific discipline. The list goes on. It even includes things like appeal to consensus, when the consensus is of people who have relevant expertise. I'm not a climate scientist. I believe that humans are contributing to climate change because the consensus among experts is that it's true. But the word "expert" matters there. The fact that a group of my friends happen to think that something is true may not give me much reason at all to believe as they do.

We may want to pause over the word "confirm." If by "confirm," you mean "establish with certainty," we usually can't do that. If something isn't just a matter of meaning, math or logic, there's room to be wrong no matter how careful we are. Still, in many cases, there's not much room. Is it possible in some abstract sense of "possible" that Obama wasn't President in 2010? Yes. Is there room for a reasonable person to think he wasn't? Hard to see how.

This point bears on your question about "induction." Outside of math, logic, and meaning, what we know we know by experience---direct or indirect, ours or someone else's. In those cases, there's always room for doubt, and what we believe is more or less likely. There's no way around that; it's almost always possible that one or more of the premises of our arguments might be false. That's the price we pay for having knowledge about the world itself and not just, to use Hume's phrase, relations among ideas.

Summing up: there are lots of ways to confirm things, but which way is best depends on what we're trying to confirm. In most cases, "confirm" doesn't amount to "become certain." There are fallacious ways to argue for a premise, but reasonable ways of confirming one's beliefs---consulting experts, for example---may be superficially like fallacious ways (asking a casual sample of my friends, for instance, when the subject is one they have no special knowledge about). There's no simple rulebook for knowledge, even though there's a great deal that we actually know.

Can we perceive the natural laws, which have shaped our ability to perceive?

I'm not sure I would use quite the verb "perceive" to describe our cognitive grasp of natural laws, but I don't see any reason why we can't discover at least some natural laws, including those that have shaped our ability to perceive (or discover). That is, I don't see any reason why a natural law's having shaped our ability to perceive should make that natural law especially hard for us to discover. It's not as if we should think of natural laws as having purposely shaped our ability to perceive in order to keep themselves hidden from us.
