
I am an undergraduate student who is interested in attending medical school. My primary reason for wanting to work in the medical field is to improve access to medical care in underserved communities further along my career path. However, attending medical school costs quite a bit. While I am fortunate enough to likely be able to pay for med school without crippling debt, I can't help but think that the money going towards my education could go towards better causes, such as improving infrastructure in rural, underserved communities and improving vaccination rates. Would the most moral option here be to donate money going towards my education to these causes or to go to medical school and use my education to improve access to healthcare in underserved populations?

Some people hold the view that if we're doing what we really ought to, we'll give up to the point where giving more would decrease the overall good that our giving produces. The most obvious arguments for that sort of view come from utilitarianism, according to which the right thing to do is the action that maximizes overall utility (good). If I could give more and overall utility would rise on that account, giving more is what I should do. Other views are less demanding. A Kantian would say that our most important duty is to avoid acting in ways that treat others as mere means to our own ends. Kantians also think we have a duty to do some positive good, but how much and in what way is left open. I'm not aware of any Kantians who think we're obliged to give up to the point where it would begin to hurt. Who's right? I do think there's real wisdom in the idea that a system of morality won't work well if it's so demanding that few people will be able to follow it, and so I'm not persuaded by the point of...

Is it morally acceptable to hate a crime but not the criminal?

I'm having a bit of trouble understanding why it wouldn't be. Among possible reasons why it would be just fine, here are a few. 1) People, and more generally sentient beings, occupy a very different place in the moral universe than mere things (including events and abstract ideas). Moral notions don't even get a grip unless they refer back one way or another to beings as opposed to things. There's simply no reason to think that our attitudes toward people should be in lock-step with our attitudes toward non-sentient things. 2) Moreover, you might think that hating people is almost always not a good thing. It makes it harder to see their humanity, it makes you more likely to treat them less fairly, it fills you up with emotional bile. Hating a crime might not be emotionally healthy either, but given the distinction you're interested in, it's not personal; it's strong moral disapproval of a certain kind of action, and that might be both appropriate and productive. 3) Suppose someone you care...

If the basis of morality is evolutionary and species-specific (for instance, tit for tat behaviour proving reproductively successful for humans; cannibalism proving reproductively successful for arachnids), is it thereby delegitimised? After all, different environmental considerations could have favoured the development of different moral principles.

There's an ambiguity in the words "basis of morality." It might be about the natural history of morality, or it might be about its justification. The problem is that there's no good way to draw conclusions about one from the other. In particular, the history of morality doesn't tell us anything about whether our moral beliefs and practices are legitimate. Even more particularly, the question of how morality figured in reproductive success isn't a question about the correctness of moral conclusions. Here's a comparison. When we consider a question and try to decide what the best answer is, we rely on a background practice of reasoning. That practice has a natural history. I'd even dare say that reproductive success is part of the story. But whether our reasoning has a natural history and whether a particular way of reasoning is correct are not the same. Modus ponens (from "A" and "If A then B," conclude "B") is a correct principle of reasoning whatever the story of how we came to it. On the other...
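The correctness of modus ponens, as mentioned above, is a purely formal matter, independent of its natural history. As a minimal sketch, the rule can be stated and checked in Lean (assuming Lean 4 syntax):

```lean
-- Modus ponens: from a proof of A and a proof of A → B, we obtain a proof of B.
-- The type checker accepts this regardless of any story about how humans
-- came to reason this way.
example (A B : Prop) (hA : A) (hAB : A → B) : B :=
  hAB hA
```

The point of the comparison survives the formalization: whether this inference is valid is settled by logic alone, not by evolutionary history.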

Is there any problem, moral or otherwise, in mixing money and enlightenment? For instance, asking people to pay for spiritual guidance. Should philosophers receive a salary?

Even spiritual teachers have to eat. One might be suspicious of someone who withheld "enlightenment" unless the seeker paid, though in many traditions, support for spiritual guidance comes from voluntary donations. Whatever one thinks about people who explicitly claim to be providing spiritual help, counselors and psychotherapists offer something that's at least in a ballpark not too many miles away. For instance: there are interesting similarities between what one might learn from Buddhist practice and from cognitive behavioral therapy. I, for one, would be puzzled if someone thought a therapist shouldn't charge for her services. Exactly how the lines get drawn here and what, if anything, underlies the difference is an interesting question. If gurus shouldn't be paid, should doctors? How about artists? After all, insofar as I count any of my own experiences as spiritual, some of the more profound ones came from paintings, works of literature, pieces of music. In any case, I'd suggest caution about...

Is there a particular philosophical discipline that deals with large numbers of people doing something innocuous, but having a deleterious effect on a much smaller number of people? If so, does it have a name? Like blame-proration, guilt-apportionment, or anything? Thanks!

Perhaps an example would help, but I think I have the idea. We might want to start by modifying your description a bit. You wrote of large numbers of people doing something innocuous but having a bad effect on a small number of people. If you think about it, however, that means the word "innocuous" isn't really right. And so I'm guessing you have something like this in mind: there's a certain sort of action (call it X-ing) that large numbers of people perform and that has something like the following handful of features. First, it doesn't harm most people at all. Second, though X-ing is potentially harmful to some people, the harm would be minimal or maybe even non-existent if only a few people X-ed, and only occasionally. Third, however, enough people actually do X that it causes palpable harm to the small minority. And given your suggested terms ("blame-proration," "guilt-apportionment") I take your question to be about just how culpable the people who X actually are. If that's right, it's a nice...

There are certain kinds of moral belief that we view in a pluralistic manner, and others that we take to be absolute. For an example of the former, suppose that I'm a vegetarian who believes that eating meat is immoral. Most people would say that it's inappropriate for me to harangue meat eaters, since they are just as entitled to their beliefs about diet as I am to mine. By contrast, we don't reason this way about things like murder. I am not obligated to respect the beliefs of someone who thinks murder is permissible--on the contrary, I may be morally remiss if I don't try to stop or correct him. What explains the difference between these two kinds of moral belief?

It's an interesting question. Some thoughts. Suppose Rufus believes that murder is morally acceptable. If I know of a murder he's trying to commit, then most of us agree that I'm not just allowed but even obliged to do various things to prevent it. (Telling the police would be the most obvious.) But if I have no reason to think that Rufus is planning to kill anyone, then while it's perfectly okay for me to try to argue him out of his view, most of us don't think it's okay to harass and harangue him about this admittedly despicable view. One reason for this is a matter of keeping civil peace; more on that below. Of course, there may be gradations here. Suppose it's not just that Rufus thinks it's okay to commit murder; suppose he makes a career of trying to convince other people. We'd still think there are limits to how far we can go in protesting, objecting and so on, but the limits would be fewer than they'd be if he were just some random weirdo who wasn't likely to act on his views and also wasn't...

Is it worse to break a promise in order to avoid telling a lie, or to tell a lie in order to keep a promise?

There's no all-purpose answer. Breaking some promises is worse than breaking others. Telling some lies is worse than telling others. But there's no good reason to think that every broken promise is worse than any lie or vice-versa. Telling some lies is worse than breaking some promises; breaking some promises is worse than telling some lies. If you really have to choose, the least bad choice will depend on the details.

People always say that one's actions should not be aimed at preventing others from taking their own actions, and such interference is often the subject of general denouncement. For example, when a pianist plays piano in his neighborhood at midnight and disturbs another person's sleep, people would say that playing piano is more of a disturbance than sleeping, and so one should avoid playing piano when someone else is sleeping. What is the intrinsic difference between the two? Couldn't I say that the sleeping makes it inconvenient for the pianist to play piano, and so one should not sleep when someone else is playing piano? What is the logical basis of making any such judgements?

There's no purely intrinsic reason, but there's still a reason overall. Here's a comparison. In the US, it's not just illegal but also wrong to drive on the left side of the road. In South Africa, the opposite is true. What makes it wrong to drive on the left in the US and on the right in South Africa is that there is a widely accepted practice (in fact, a rule in this case) about how we drive, and violating this particular practice puts others at risk. Now in the case of the late-night piano player, there may be no literal risk created by the disturbance the piano player creates. But there's still what's sometimes called a coordination problem here, and there's a way of getting on that solves the problem. Most people sleep at night. Most people also need a reasonably quiet environment to sleep. And so we have a combination of custom and, in many jurisdictions, law to make it possible for people to do things like practice the piano and for people to sleep. Since most humans are wired to sleep at...

Do we have moral duties towards institutions (like the Red Cross)? Do institutions have moral rights?

I often find the words "duty" and "rights" confusing outside of legal contexts, because they're weighted with theoretical overtones that don't always help us think clearly about how we should act and what we should do. So let me refocus the question: are there things we should and shouldn't do when it comes to institutions? I think the answer is yes. Suppose that I find a way to hack into the Red Cross bank accounts and steal money. I shouldn't do that. It's not just that it's against the law (though it certainly is). It's just wrong. It's not wrong just because it may hurt the CEO of the Red Cross, or any of the people who work for the Red Cross. Those people come and go, and it may even be that they aren't actually harmed by my act of theft. What I'm doing is wrong because (dare I say?) it harms the Red Cross itself. We could provide lots of related examples. And when it comes to the fundamental question, that's a pretty good way to answer it, I think. We can do things that help or harm organizations...

Dear philosophers, I've been told that instead of looking for objective moral facts, many philosophers see the task of ethics as bringing intuitions into "reflective equilibrium". But if intuitions aren't a sort of sixth sense that allows people to perceive moral facts, and are merely behavioural tendencies from nature and nurture, why ought we try to systematise them? What special authority do they have, and why should we care about them?

I think there may be some false dichotomies afoot here. Most of us think there are some first-order moral facts. For example: I may think (I do, actually) that torturing people just for fun is wrong. However, if I'm doing moral philosophy, I'm not trying to assemble a collection of first-order moral truths. I'm trying to present an account of (for instance) what makes things right or wrong. And so I offer some general view—for example, some version of utilitarianism, perhaps. But how do we decide whether my theory is correct? What counts as evidence? One important piece of evidence is whether my theory can account for uncontroversial cases. I think it's pretty uncontroversial that torturing people for fun is wrong. If my theory didn't entail this, that would be a serious piece of evidence against it. (Compare: if a scientific theory fails to account for some apparently unproblematic piece of experimental evidence, that's a strike against the theory.) Part of the process of arriving at reflective...
