
Ethics
Animals

The probability in my mind that I am correct in attributing extensive moral personhood to other humans is very high. To non-human vertebrates I attribute slightly less extensive but still quite broad moral personhood, and of this, too, I am quite confident. But I accept that, being a fallible human being, I might be wrong: perhaps I should give them no moral personhood, or perhaps moral personhood of the kind I ascribe to humans. Continuing in the same line, I ascribe almost no moral personhood to bacteria and viruses. But again, given that I am fallible, mustn't I accept some non-zero probability that they deserve human-like personhood? If so, and I am a utilitarian, then given the extremely large number of bacteria and viruses on the planet, it seems that even if I am very sure that bacteria are of only minimal moral importance, I must still make serious concessions to them, because it seems doubtful that my certainty is high enough to overcome their vast numbers. (I am aware that it is not entirely clear how best I could promote bacterial welfare, but even so, it seems I simply cannot be as certain as I would need to be in order to disregard them as much as I would like.) Am I missing something? Is there a solution to this problem?
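The worry in the question can be put as a simple expected-value calculation. The sketch below is illustrative only; the credence, population, and weight figures are made-up assumptions chosen to show the structure of the argument, not numbers from the post:

```python
# A naive expected-utility rendering of the questioner's worry.
# All numeric values are hypothetical assumptions for illustration.

def expected_moral_weight(credence, population, weight_if_person):
    """Expected aggregate moral weight of a population, given a
    credence that its members have the stated moral status."""
    return credence * population * weight_if_person

# Humans: full confidence, roughly 8 billion individuals.
humans = expected_moral_weight(1.0, 8e9, 1.0)

# Bacteria: a one-in-a-billion credence that they have human-like
# status, but on the order of 1e30 individuals on Earth.
bacteria = expected_moral_weight(1e-9, 1e30, 1.0)

# Even that tiny credence makes the bacterial term dominate the
# human term, which is exactly the "vast numbers" problem raised.
print(bacteria > humans)
```

On these assumptions the bacterial term comes out around 1e21 against 8e9 for humans, so the naive calculation tells the utilitarian to weight bacteria heavily unless the credence can be driven all the way to zero (or the aggregation itself is rejected), which is the dilemma the question poses.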
Accepted:
May 27, 2015

Comments


Allen Stairs
June 8, 2015

It's a very interesting question. It's about what my colleague Dan Moller calls moral risk. And it's a problem not just for utilitarians. The general problem is this: I might have apparently good arguments for thinking it's okay to act in a certain way. But there may be arguments to the contrary—arguments that, if correct, show that I'd be doing something very wrong if I acted as my arguments suggest. Furthermore, it might be that the moral territory here is complex. Putting all that together, I have a reason to pause. If I simply follow my arguments, I'm taking a moral risk.

Now there may be costs to taking the risks seriously. The costs might be non-moral (say, monetary) or, depending on the case, there may be potential moral costs. There's no easy answer. Moller explores the issue at some length, using the case of abortion to focus the arguments. You might want to have a look at his paper.

A final note: when we get to bacteria, I think the moral risks are low enough to be discounted. I can't even imagine what it would mean for bacteria to have the moral status of people or even of earthworms.

Source URL: https://askphilosophers.org/question/24398?page=0
© 2005-2025 AskPhilosophers.org