
Ethics

Several of my friends are becoming increasingly enthusiastic about "objectivism"; more specifically, they eschew altruism as something that should be considered "morally good" (regardless of whether or not there are any "truly altruistic" motivations in actuality). I'm inclined to take something of the opposite tack on moral issues, however. I am wondering what ethical arguments could be made AGAINST a moral system that explicitly renounces any kind of self-interested motivation. That is, could the argument actually be made that a person is being immoral if, whenever faced with a decision that would result in either her own loss and another's gain or vice versa, she explicitly chooses to be altruistic, just because she believes that it is not fair to "privilege yourself" above others, and that the only way to avoid doing this is to only choose for the other person?
Accepted:
June 22, 2008

Comments

Louise Antony
July 4, 2008 (changed July 4, 2008)

Your question contains a false presupposition, viz., that "the only way to avoid [privileging yourself] is to only choose for the other person." If you wanted to be scrupulously impartial, you would have to treat all persons as having an equal moral claim on you. But you are yourself a person -- so you are among those who have a moral claim. So the best way to implement impartiality would be to hold a lottery in which you also hold a ticket -- deciding, in other words, through some random procedure who should get the benefit of your moral concern in a given instance.

The point is important, and one that is stressed by both rules-theorists (or deontologists) like Kant, and consequentialists, like Mill. For Kant, it is the fact that human beings have the capacity for reason that makes them appropriate objects of moral concern. Since you have the capacity for reason, you are precisely as morally valuable as anyone else. You thus have duties to yourself, and it would be as much a kind of partiality to neglect these duties to yourself as it would be to neglect your duties to others. The first kind of neglect is not something that's usually a danger with people; that's probably why ethical theorists and religious leaders tend to focus on altruism -- as a kind of corrective to what they presume would otherwise happen.

Consequentialists, too, acknowledge parity between consequences that involve you and those that involve any other sentient creature. A hedonistic utilitarian -- someone who believes that the right action to perform is the one that will lead to the greatest total amount of happiness -- will insist that you assign the same measure of value to your own happiness as you do to anyone else's. Consequentialism may be more likely to yield the result that you should almost never choose yourself over others, because it generally requires aggregating units of some good, like happiness, and it's relatively easy for the (e.g.) happiness of a group of others to outweigh the good of a single individual.

Many liberatory philosophies, like the Black Power movement or feminism, focus on these duties to the self (whether or not they use the language of "duty") because the devaluation of oneself is one of the psychological results of being in an oppressed condition. But feminism has been particularly concerned with these issues because sexist oppression involves there being different sets of moral norms for men and for women, and an important norm that's differentially applied to women is self-sacrifice. (The philosopher Jean-Jacques Rousseau said it followed from women's nature that they ought to serve and obey men.) "Altruism" is only a virtue if it's expected equally of everyone, and only if its practice is consistent with a proper assessment of one's own worth.

A really good discussion of the issues you raise, including the issue of whether genuine altruism is possible, can be found in Robert Nozick's book, Anarchy, State, and Utopia. Interestingly, both Nozick and Ayn Rand (the inventor of "objectivism") are often taken to be the philosophical founders of the political view called "libertarianism," which advocates for free economic markets and thoroughly permissive social policies. But Nozick, unlike Rand, believes that we both can and do morally value other people for their own sakes, and not just for the effects their fortunes have on our feelings. An adaptation of one of Nozick's own thought experiments makes the point.

In Anarchy, State, and Utopia, Nozick considers the question whether we actually value external things, or whether we only value the subjective experiences we get when we acquire or accomplish external things. To test this, Nozick has us imagine the possibility of an "experience machine" [I'm sure this is discussed at other places on this site]. We tell the operator of the machine what we would like to have or accomplish ("I'd like to write a Pulitzer-Prize-winning novel, go to bed with Brad Pitt, and bring peace to the Middle East"). The operator generates a script to be run by a virtual-reality generator. Once we enter the machine, we will be presented with exactly the sensory experiences we would have had if we had actually done these things -- and we'll forget the fact that we are in the machine, or that we specified this particular script. Nozick speculates that most of us, given the choice of entering the machine or living our (possibly less exciting and gratifying) real lives, will choose real life. That would show that what we value is not the subjective experiences success would bring us, but rather the accomplishments themselves.

Now a modification of this thought experiment can test whether we really value other people for themselves, or only for the effects their fortunes have on us. [Scholarly aside: I believe that this idea is due to John Rawls, but I've never been able to find it in print. If any of my co-panelists can confirm or disconfirm the authorship of this case, I'd be grateful. If no one can, I'm going to begin claiming that I made it up.] Here's the deal. An evil genius tells you that, whether you like it or not, she's going to hook you up to an experience machine. Moreover, she's going to take control of the future of some person whom you love. But she's going to give you a choice: On option A, your loved one is guaranteed a future full of happiness and accomplishment. You, however, will be given experiences that will make you believe that your loved one is suffering relentless torment. On option B, your loved one really will be suffering relentless torment; on the other hand, you will be given experiences that make you think your loved one is happy and flourishing. Which do you pick? If you pick option B, then you really only value your loved one instrumentally; you only care about that person's happiness insofar as it affects you. But if you choose option A, then you clearly value your loved one for their own sake.

Notice that this thought-experiment only tells you what your value system is like, and says nothing about what value system you ought to have. You were careful to make this distinction at the beginning of your question, which is great.

Source URL: https://askphilosophers.org/question/2209