
Ethics

Why do most philosophers assume that there is one manner of justifying ethics? Couldn't it be that some ethical principles or rules can be justified by a consequentialist approach, others by an evolutionary approach, still others by a deontological approach and some are just relative to specific cultures?
Accepted:
October 31, 2007

Comments

Douglas Burnham
November 1, 2007 (changed November 1, 2007) Permalink

What a fine question! A few responses suggest themselves.

1. It is not just a case of justifying ethics, as if ethics is standing around waiting for someone to justify it. Rather, it's a case of asking what ethics is in the first place. Thus Plato's famous insistence that he's not after an example but rather the Idea. There are of course many rules we hold ourselves to, or virtues we pursue, that we call 'ethical'. The philosopher will ask 'Is this really what is meant by ethics?' Or is it just a cultural mannerism, an arbitrary law, a convention?

2. Now, in reasoning to an answer to the 'what is ethics?' question, it might be (and often is) that the answer includes a universality criterion. That is to say, part of the meaning of the ethical is that any other way of thinking about the ethical is nonsensical, or is even unethical. This might be the uniqueness of the form of the Good in Plato, the essential characterisation of human beings as rational in Aristotle, or the universalisation of the concept of law in Kant.

3. If part of the purpose of ethical philosophy is to arrive at criteria by which judgements can be made, then having more than one form of justification would seem to work against this. What happens when we grant some territory to deontologists and some to utilitarians, and then discover an issue not clearly in either region? Are we willing to admit so easily that there are whole types of ethical problems that are essentially undecidable?


Allen Stairs
November 1, 2007 (changed November 1, 2007) Permalink

In many fields there are what some people call lumpers and splitters. Douglas has given a splendid answer that reflects a lumper/unifier/hedgehog perspective. Here's a rather different take, from a splitter/diversifier/fox point of view.

It's often held that ethical obligations trump all others. If something is right from a self-interested point of view, for example, but wrong ethically, then the ethical judgment wins. Another feature of ethical judgments (though not unique to them) is that ethical "oughts" satisfy a universalizability principle: if something is right for a person in a given set of circumstances, then it's right for anyone else in those same circumstances. The mere fact that it's me rather than you is beside the point. If we accept these points, then we've said something unifying about the ethical, but it's only formal unity. It's consistent with very different views about what is actually right or wrong in particular cases. Your question reflects a suspicion that I share: there might be many potentially competing values, and it may be that no one substantive value is the "master value" to which all others must be reduced. Fairness might matter; happiness might matter; dignity might matter. And on it might go.

There's still room for a kind of unity here. We might turn to something called "multi-attribute utility theory." Roughly, we ask (i) what values are relevant? (ii) how much of each value would a particular decision create? and (iii) what are the relative WEIGHTS of the different dimensions of value? For example, suppose one action promises 6 points out of 10 on the Happiness scale but only 2 out of 10 on the Justice scale, while the alternative promises 3 points on each scale. If the "trade-off weight" ranks Justice higher than Happiness by a large enough proportion (say, a factor of 4), the second action would come out as the right one.
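To make the arithmetic concrete, here is a minimal sketch in Python of the weighted-sum calculation just described; the value names, the scores, and the factor-of-4 weight are simply the illustrative figures from the example, not a claim about how such weights would actually be settled.

    # Minimal sketch of the weighted-sum scoring described above.
    # Value names, scores, and weights are purely illustrative.
    def weighted_score(scores, weights):
        """Sum each value's score multiplied by its trade-off weight."""
        return sum(scores[value] * weights[value] for value in scores)

    # Trade-off weights: Justice counts four times as much as Happiness.
    weights = {"happiness": 1, "justice": 4}

    # Scores (out of 10) for the two candidate actions in the example.
    action_a = {"happiness": 6, "justice": 2}
    action_b = {"happiness": 3, "justice": 3}

    print(weighted_score(action_a, weights))  # 1*6 + 4*2 = 14
    print(weighted_score(action_b, weights))  # 1*3 + 4*3 = 15, so the second action wins

Note that the sketch treats the weights as fixed numbers handed to us in advance, which is exactly the assumption the next paragraph goes on to question.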

Perhaps any ethical decision can be analyzed within this framework. If so, that would be a kind of unification, but it wouldn't tell us what the relative weights are. It MIGHT be that there are only a few relevant dimensions, and it MIGHT be that there are straightforward facts about the trade-off weights. If so, we have the Grand Unified Theory of Ethics. But on the other hand, it might be that the weights are highly sensitive to the details and the context in ways that make strong generalizations hard to come by.

Would that mean that some ethical problems are essentially undecidable? Perhaps. But we might still want to make a distinction. It may be that given the range of possible moral quandaries, there's no finite collection of rules or axioms that decides all possible cases in advance. Even if that were true, it could still be that there's a uniquely correct answer to every moral question. Compare: we can't spell out axioms that answer all questions about numbers (so Gödel taught us), even though we assume that all such questions have answers. And it could even be that when we're actually confronted with any particular set of particulars, we have the capacity to sort them out morally. It would just be that we can't always do it by appeal to pre-specified axioms.

The reflections of the preceding paragraph are a bit fanciful, and in fact I'll confess to suspecting that not all ethical questions actually have uniquely correct answers. But confession is one thing, argument is another. And since this answer has gone on too long already, I will spare myself the embarrassment of revealing that I don't have strong arguments to back up my suspicion.
