Hey Philosopher folk: Do you know of any viable or at least well-examined arguments ever proposed that conclude that one murder (or some equivalent malfeasance) is no better or worse than 8 million murders? Or, generally, that multiple instances of a wrongdoing have no greater or lesser value of any kind, apart from the numerical? If not, could anyone conceive of a possible argument for this? Please note, I am not a serial killer or mass murderer; this question just arose in a debate about an unrelated topic.

Well, I'm glad to hear you are not a murderer. If you were, I would argue that it is worse to kill greater numbers of people like this:

1. If act or outcome A is morally wrong, then A x n (n instances of A) is more morally wrong than A alone. [A stronger version might say A x n is n times morally worse.]
2. Murder is morally wrong.
3. So, n murders are morally worse than one murder. [Or: any greater number of murders is worse than just one, perhaps n times worse.]

Like most good arguments, this one just puts things in a good form for us to be able to consider the premises. It sounds as if you (like me) accept premise 2. So, what justifies premise 1? The easiest way to justify it is if one is a utilitarian (or other consequentialist) who measures wrongness in terms of bad consequences or outcomes. So, if one murder causes X amount of bad consequences (e.g., suffering, loss of potential flourishing for the victim, etc.), then n murders would cause (roughly) nX bad consequences. And it would be morally worse [n...
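On the consequentialist reading above, the strong version of premise 1 is just linear aggregation of bad consequences. A minimal sketch of that arithmetic (the function name and the unit value X are illustrative assumptions, not anything from the answer):

```python
# The "strong" version of premise 1, read utilitarian-style: if one
# murder produces X units of bad consequences, n murders produce
# roughly n * X units. The linearity assumption is the answer's
# "(roughly) nX"; the specific numbers here are arbitrary.

def total_badness(n_wrongs: int, badness_per_wrong: float) -> float:
    """Aggregate badness under a simple linear (additive) model."""
    return n_wrongs * badness_per_wrong

X = 1.0  # stipulated badness of a single murder (arbitrary units)

# Premise 1 in action: more instances, more aggregate badness.
assert total_badness(1, X) < total_badness(8_000_000, X)
# Strong version: n wrongs are exactly n times worse than one.
assert total_badness(5, X) == 5 * total_badness(1, X)
```

Of course, the philosophical work is in defending linearity itself; a critic might hold that badness aggregates sublinearly, or not at all, which is exactly what the original question asks about.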

Hi, I'm struggling to understand free will. I've been told that either of the following scenarios is compatible with free will: (1) an omniscient being that knows all future events; (2) a block universe where future and past in some sense coexist with the present. But if free will is compatible with these scenarios, would it be merely epiphenomenal? Would free will play any causative role if either of these conditions were true?

Your question suggests that you are thinking that free will must be something (e.g., a causal power) that is not a part of the rest of the universe, that it must be something that is (1) outside of the universe about which the omniscient being has complete knowledge, or (2) outside of the events that occur within the 'block' universe (an Einsteinian universe where there is no passage of time and the laws of nature describe the relationships between all the 'tenseless' events). If you think of free will that way, then yes, it seems like it cannot get a causal toe-hold on a universe that is already 'set in stone' like the block universe or one whose details are all already known by a god (who might be imagined outside the universe surveying it all at once). Free will, whatever it is supposed to be on this picture, would be cut out of the process, bypassed, epiphenomenal. However, perhaps there is a better way to understand free will, one that neither has these consequences nor makes free will a...

Why do you think there are so few women philosophers? For example, there are only four or five on this panel.

Great question, and one that many people have been asking and trying to answer recently. With Morgan Thompson in the lead, along with some other former MA students at Georgia State, we carried out an empirical investigation of why women are less likely than men to major in philosophy (women have made up about 1/3 of majors for several decades, the lowest proportion among the humanities). Without more majors, there will be too few grad students and then professors. There's a discussion of our work here: http://dailynous.com/2016/03/31/why-do-undergraduate-women-stop-studying-philosophy/ I hadn't noticed how few women philosophers are panelists here. I hope more will be invited and accept!

I'd like to eventually earn a PhD in philosophy. I'm currently choosing between undergraduate programs. Due to personal and complicated reasons I'm unable to even attempt to attend some of the more prestigious universities (Yale, Duke, etc.). I'm worried this will have a strong negative effect on my chances of graduate school, and after graduate school my chances of securing a career in academia (philosophy). I read somewhere that it's nearly impossible to publish in the best journals if you don't come from a big-name university. What is the reality of my situation?

You should try to go to an undergrad program, within your constraints, that has a strong philosophy department and ideally some success placing its majors in strong PhD programs. Many such programs are not "prestigious" in that they don't even have PhD programs. So many excellent philosophy PhDs are produced now that liberal arts colleges and public universities have hired people who are producing great research and are 'plugged into' the profession, such that their advice and letters can get you into graduate programs -- assuming you do all the hard work of earning near-4.0 grades and writing excellent papers, at least one of which is strong enough to serve as a writing sample for grad school applications. Whether or not you can find and attend such an undergrad program, you may also have the option of attending a strong MA program (like mine at Georgia State), which can give you further training in philosophy and position you to get into strong PhD programs. It can also give you a good...

Hello, I am a seventeen-year-old guy, and recently I've been having some philosophical questions that are really getting me down. There is objectively no answer to them, but I want to feel that I am not alone in asking these questions, or to know if anyone else has thoughts like these. (This is going to be long, so brace yourselves!) Basically, at this stage in my life it feels that anything I do is completely pointless. Not in a suicidal or depressed way, but it just IS pointless -- even if I blew up the world and everything on it, so what? That would just be the transfer of energy and the breaking apart of atoms. It feels like everything we do in life is for the sole aim of keeping us alive. For example, if I cut my hand off, it wouldn't ACTUALLY hurt (as atoms don't have feelings); it would just send a message to my brain that I have been wounded in some way, and my brain would make me feel a certain level of pain depending on how severe the injury is, because it could possibly be hindering my survival, and that...

I will just respond briefly, but first I want to assure you that you are not alone in your existential quandaries--many people face them, perhaps especially adolescents trying to find or make meaning in a difficult and confusing world, as well as philosophers who have been motivated to try to answer these questions for centuries (or to explain why they are not answerable or are being asked in the wrong way, etc.). I will suggest the latter sort of move: while your angst is surely genuine, what seems to be motivating it may be a bit off base. You seem to be taking a sort of reductionistic view that suggests that if it's all just atoms in the void or energy in motion or neurons in the head, then it's all meaningless. But always consider what you are contrasting your view with. Would it all be more meaningful if we were non-physical souls embodied in a material world? Why? You point out that "I AM my brain." Right. So, what you care about, love, and desire and find meaningful is, in some sense, based on...

Hi all, I don't know if anyone can answer, but is it really possible to upload our consciousness onto a computer hard drive and achieve Transcendence? I believe there was a movie with Johnny Depp in the lead that looked at this, but as I haven't seen it, I really don't know what treatment the topic got. Anyway, I hope someone will take pity on me and answer my question, because when it comes to Transcendence, it's really the elephant in the lounge room. Cheers, Pasquale.

It's too bad that movie was kinda lame. But the idea is not. If one is a functionalist about mental states, including consciousness, then one believes that our mental states can be instantiated in any system that has the same functional roles as the functional states in our brain that instantiate (or are) our mental states. Functional roles are basically what the states do. What a clock does is keep time. A clock can be implemented by a digital device, a bunch of gears, or even sand or water set up in the right system (but would they be the same clock?). Functionalists think our desires, beliefs, sensations, emotions, pains, memories, etc. can be understood in terms of what they do--that is, the way they take input information, organize it, and interact with each other, to cause output mental states and behavior. Computers helped motivate this theory of mind, and if certain versions of it are right, then our mental states could be implemented in complex enough computer systems, presumably connected...
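The clock example above is, in effect, the software idea of an interface with multiple implementations: what matters is the functional role, not the substrate. A loose sketch of that analogy (the class names here are my illustrative inventions, not anything from the answer):

```python
# A software analogy for functionalism's multiple realizability,
# riffing on the clock example: a "clock" is anything that plays the
# time-keeping role, regardless of what physically implements it.

from abc import ABC, abstractmethod


class Clock(ABC):
    """The functional role: take 'tick' inputs, report elapsed time."""

    @abstractmethod
    def tick(self) -> None: ...

    @abstractmethod
    def elapsed(self) -> int: ...


class GearClock(Clock):
    """One realization: gears advancing a counter."""

    def __init__(self) -> None:
        self.gear_turns = 0

    def tick(self) -> None:
        self.gear_turns += 1

    def elapsed(self) -> int:
        return self.gear_turns


class SandClock(Clock):
    """A different realization of the same role: grains falling."""

    def __init__(self) -> None:
        self.grains_fallen = 0

    def tick(self) -> None:
        self.grains_fallen += 1

    def elapsed(self) -> int:
        return self.grains_fallen


# Different substrates, same functional role -- both count as clocks.
for clock in (GearClock(), SandClock()):
    for _ in range(3):
        clock.tick()
    assert clock.elapsed() == 3
```

On the functionalist picture, the mind stands to the brain roughly as the `Clock` role stands to gears or sand: if a computer system realized the same input-organization-output roles, it would (on this view) have the same mental states. Whether that inference goes through is, of course, exactly what the debate is about.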

If technology advances to the point of recreating the world almost perfectly in a virtual reality (i.e. The Matrix), would it be morally acceptable to "move" into that world indefinitely? Let us assume there is a moral disparity between someone with/without family, friends, attachments moving into this virtual reality. Let us also assume there is no cost to sustain anyone's well-being in this distant future, either in this virtual world or the real world, such as rent or food. Perhaps in this virtual world there are new fun things to do, like flying freely, that in the real world one could not do. There is seemingly no catch to this, but is there a moral obligation to remain in the "real world" and do "real things?"

As usual, the answer will depend on your ethical theory. For instance, some forms of utilitarianism might require that you go into the Matrix if doing so would maximize happiness (e.g., because you'd be much happier, outweighing any unhappiness you might cause to people in the 'real world' by being hooked up to the machine). Indeed, Robert Nozick used his Experience Machine thought experiment (a prequel to The Matrix) to argue that there must be something wrong with utilitarianism precisely because he thought we would not (and should not) hook up to the machine, in which our happiness would not be based on real actions and accomplishments. (There's some interesting experimental work on whether and why people say they would or would not be hooked up.) For various reasons (not just utilitarian), I think everything depends on what you would be leaving behind and what you would be doing in the Matrix. I'm not sure what you meant when you wrote that we should "assume there is a moral disparity between...

I've recently been struggling with the ideas of Fatalism, Determinism, Compatibilism, Libertarianism, etc., and from what I've been reading, the general consensus among most philosophers is compatibilism. If that's the case, then what sense is there in being proud of myself for anything good I do? Is there such a thing as effort in my life, or am I just on an inevitable and programmed path? Truth is, I'm an artist. Online, I prefer that images be sourced, so anyone who appreciates one enough can get to it easily, and credit goes to the artist. I like to believe that the drawings I make and images I create have something respectable behind them: effort, hard work, practice, time, determination, patience, fun... but then this debate about Moral Responsibility comes up and muddles me a bit. I've been experiencing a lot of mental stuff for a while -- and through all of this, philosophical questions, existential crises, all of it just comes and never stops. It's like there's always something for me to worry, or think too...

You should not let these thoughts get you in a rut or depress you (and if you're feeling depressed or suicidal, you should definitely get professional support to make sure the problem is not more serious than you think). Fatalism is not true if it's the idea that nothing we do makes a real difference to what happens--that what's fated is going to occur no matter what. Even if determinism is true (or false), what we decide and do makes a crucial difference to what happens in the future--if we had done something different, the future would be different. I'm a compatibilist, and you can see some of my answers at this website or short articles on my personal website to get more argument for why I think this (majority) view is the right one. But no position in the free will debate suggests that our efforts don't matter, that we are just programmed machines, or that everything is inevitable (in the fatalistic sense I mention above). Or none of them should. You sometimes hear scientific skeptics...

"Eating animals can't be bad because how do you know plants don't have feelings" is a common argument against vegans. Is that a good argument?

No. Many vegans (and vegetarians) aim to minimize unnecessary suffering and believe that eating animals causes unnecessary suffering. A crucial premise of this argument is that animals can suffer pain, discomfort, and possibly even more complex unpleasant thoughts or emotions. What is the evidence for that premise? It's a best explanation (or abductive) argument. We have good reasons, based on a wide range of scientific evidence from psychology and neuroscience, to think that complex nervous systems are required to experience suffering, and the mammals we eat (and probably the birds and perhaps the fish) have nervous systems that support these experiences. Plus the behavior of these animals suggests that they can feel pain and discomfort. Plants do not have nervous systems (or anything analogous) and they do not show the behavior associated with experiencing pain (or anything else). So, we have no reason to think they suffer while they live or when they are harvested. (Personally, I think...

What would Aristotle do to answer the trolley problem? Would he kill the five people, or switch the tracks to kill only one?

Great question, and one that is rarely discussed in the over-worked trolley problem literature, mainly because the cases are set up to illuminate a conflict between the utilitarian response, which seems to suggest killing 1 to save 5 regardless of the means of doing so, and the Kantian response, which seems to allow switching the track to save 5 (with the death of 1 as a mere side effect) while disallowing pushing 1 intentionally as a means of saving 5. But what would a virtue theorist like Aristotle, or the originator of the trolley problem, Philippa Foot, say? Well, there's no simple answer, since virtue theory is (intentionally) open-ended and detail-driven. It would say that the right thing to do in each case is what a virtuous person would recognize as the right thing to do, given the specific details of the case. Personally, I think the virtuous person would say it is morally required to switch the track in the one case and morally wrong to push someone to stop the trolley in the other case. In part,...
