
Consider the number Pi. It extends to infinity: it has no end, and there is no repeating pattern to its digits. Currently we have computers that can calculate Pi out to many thousands of digits, but at a certain point we reach a limit. Beyond that limit those numbers are unknown and essentially do not exist until they are observed. With that in mind, my question is this: if we could create a more powerful computer that could continue to calculate Pi beyond the current limit, and we started two identical such computers at exactly the same time, would we observe them generating the same numbers in sequence? If so, would that not imply that reality is deterministic, in that unobserved and unknown numbers only become "real" upon being observed, and that if identical numbers are generated those numbers have been, somehow, predetermined? Alternatively, if our reality were non-deterministic, would that not mean that...

You're no doubt right that any computers we happen to have available will only compute π to a finite number of digits, though as far as I know, there's nothing to stop a properly designed computer from keeping up the calculation indefinitely (or until it wears out). But you add this: "Beyond that limit those numbers are unknown and essentially do not exist until they are observed." Why is that? Let's suppose, for argument's sake, that we'll never build a computer that gets past the quadrillionth entry in the list of digits in π. Why would that mean that there's no fact of the matter about what the quadrillion-and-first digit is? What does a computer's having calculated it, or (at least as puzzling) somebody's having actually seen the answer, have to do with whether there's a fact of the matter? To be a bit more concrete: the quadrillion-and-first digit in the decimal expansion of π is either 7 or it isn't. If it's 7, it's 7 whether anyone ever verifies that or not. If it's not 7, then it's...
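
In case a concrete illustration helps with the "keeping up the calculation indefinitely" point: below is a minimal Python sketch of Gibbons' unbounded spigot algorithm (the function name pi_digits is mine), which streams the decimal digits of π one at a time for as long as you let it run. Nothing in the algorithm fixes a last digit; only time and memory do.

    # Gibbons' unbounded spigot algorithm (Gibbons, 2006): yields the
    # decimal digits of pi one by one, with no built-in stopping point.
    # Python's arbitrary-precision integers do the heavy lifting.
    def pi_digits():
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n  # n is the next decimal digit of pi
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    digits = pi_digits()
    print([next(digits) for _ in range(15)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]

The same loop that produces the first digit would produce the quadrillion-and-first; the only question is whether the hardware holds out.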

When Bernie Sanders talks about healthcare being a "right", is he talking nonsense? If you consider any other right in the Bill of Rights (eg right to bear arms), it's about freedom from government interference. It's something I can hold against the government. But what Sanders wants seems to be the opposite of that. To pay for a healthcare system, you need to tax people. So, basically, a so-called right to healthcare really means an obligation on the government to interfere with my money. This so-called right would limit my freedom instead of protecting it!

You seem to be assuming that the idea of a positive right is nonsense, though perhaps you don't intend anything quite that strong. If that's what you do intend, then I'll leave it to others to say more about the debate, but I'd simply note that it's not just obvious that only negative rights can be genuine rights. What I want to do instead is highlight an assumption that lies behind your example and point out that it's open to question. It's the assumption that there's some antecedent fact of the matter about what's "your money." Like it or not, the money you earn (whether as salary or as an entrepreneur) comes to you within a system in which government is already deeply involved. There are courts and police. There are regulatory bodies that keep the banking system (for example) from turning into the wild west. There's a vast network of infrastructure that in fact is provided through the government. The list could clearly be extended. That background of rules, institutions, personnel, physical systems...

I am an undergraduate student who is interested in attending medical school. My primary reason for wanting to work in the medical field is to improve access to medical care in underserved communities further along my career path. However, attending medical school costs quite a bit. While I am fortunate enough to likely be able to pay for med school without crippling debt, I can't help but think that the money going towards my education could go towards better causes, such as improving infrastructure in rural, underserved communities and improving vaccination rates. Would the most moral option here be to donate the money going towards my education to these causes, or to go to medical school and use my education to improve access to healthcare in underserved populations?

Some people hold the view that if we're doing what we really ought to, we'll give up to the point where giving more would decrease the overall good that our giving produces. The most obvious arguments for that sort of view come from utilitarianism, according to which the right thing to do is the action that maximizes overall utility (good). If I could give more and overall utility would rise on that account, giving more is what I should do. Other views are less demanding. A Kantian would say that our most important duty is to avoid acting in ways that treat others as mere means to our own ends. Kantians also think we have a duty to do some positive good, but how much and in what way is left open. I'm not aware of any Kantians who think we're obliged to give up to the point where it would begin to hurt. Who's right? I do think there's real wisdom in the idea that a system of morality won't work well if it's so demanding that few people will be able to follow it, and so I'm not persuaded by the point of...

Is it strange that you can't divide by zero?

It may seem strange at first blush, but there's a pretty good reason why division by 0 isn't defined: if it were, we'd get an inconsistency. You can find many discussions of this point with a bit of googling, but the idea is simple. Suppose x = y/z. Then we must have y = x*z. That means that if y = 2, for example, and z = 0, we must have 2 = x*0. But if we multiply a number by 0, we get 0. That's part of what it is to be 0. So no matter what x we pick, we get x*0 = 0, not x*0 = 2. Is it still strange that we can't divide by 0? If by "strange" you mean "feels peculiar," then it's strange from at least some people's point of view. But this sense of "strange" isn't a very good guide to the truth. On the other hand, if by "strange" you mean "paradoxical" or something like that, it's not strange at all. On the contrary: we get paradox (or worse: outright contradiction) if we insist that division by zero is defined.
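
For anyone who wants the argument in symbols, here is the same reductio set out in a few lines of LaTeX (just a restatement of the reasoning above, assuming only the definition of division and the fact that x*0 = 0 for every x):

    % Suppose division by zero were defined, and put x = 2/0.
    % By the definition of division, x = y/z requires y = x \cdot z:
    \[ x = \tfrac{2}{0} \implies 2 = x \cdot 0 \]
    % But multiplying anything by zero gives zero:
    \[ 2 = x \cdot 0 = 0 \]
    % Contradiction, since 2 \neq 0. So 2/0 can be given no value.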

When asked to choose between two competing theories, A and B, each of which fits the facts, people will sometimes resort to asking questions like, "Which theory is the more probable?" or "Which theory is simpler?" or even "Which theory involves the least upset to all my other beliefs?" Well, what about, "Which is the less weird theory?" Could weirdness (that is, something like distance from everyday experience) count as a good criterion on which to endorse one theory over another? Einstein seems to be appealing to some idea like this in the comment that God doesn't play dice. And would it be fair to say that many philosophers appeal to something like this when they reject panpsychism?

Philosopher John Haugeland once offered a sort of counterpart to Ockham's razor: "Don't get weird beyond necessity" (from "Ontological Supervenience," Southern Journal of Philosophy, 1984, pp. 1-12). Of course, the hard part is spelling out what weirdness amounts to and why it counts against a hypothesis. For example: Ockham's razor tells us not to multiply entities beyond necessity; it stands in favor of parsimonious theories. Panpsychism is certainly weird, but from one point of view it's parsimonious: it says that there aren't actually two kinds of physical things (conscious and unconscious) but only one. Does the weirdness swamp the parsimony? If so, why? So as a quick and dirty rule of thumb, "Pick the less weird theory" seems fine. As a serious methodological rule, it may need some work.

What is the panel's response to the philosophical community's ad hominem attacks on Rebecca Tuvel and her article in Hypatia? There was no engagement with her ideas at all, and the editors of Hypatia were forced to remove her article and publish an apology, merely because Ms. Tuvel asked uncomfortable questions.

I just wanted to clear up an important point: the article was not removed. It is still in the journal, including the online edition, and it will stay there. I'd suggest reading this piece, http://www.chronicle.com/article/A-Journal-Article-Provoked-a/240021, which gives a clearer picture of the review process of the journal itself. In particular, it makes clear that the associate editorial board doesn't make decisions about what gets published and isn't involved in the day-to-day operation of the journal. I will leave it to others to discuss more substantive issues.

Many people, like myself, think of Ayn Rand when we think of philosophy, having read her books when young, etc. Coming from this sort of background, it was surprising to me, recently, to be told that the majority of professional philosophers don't regard her as a philosopher at all, or, if they do, take little notice of her. Is that truly the attitude amongst philosophers? If so, is there any particular reason for it? For instance, is it to do with resistance to ideas that come from outside the university?

I don't know if most philosophers would say that she's no philosopher at all, but I suspect many would say she's a marginal philosopher. One reason is that however influential she may have been, many philosophers don't think she's a very good philosopher—not very careful or original or analytically deep—even if they happen to be broadly sympathetic to her views. The fact that she came from outside the academy by itself wouldn't be disqualifying, but in one sense, philosophers are not just people who engage with philosophical issues; they're people who are part of a community whose members read and respond to one another (even when they disagree deeply) and interact in a variety of particular ways. Being outside the academy tends to put you outside the ongoing conversation of that community. Whether that's good, bad, or neutral is another story, but to whatever extent "philosopher" means "someone who's a member of a certain intellectual community," the fact that she was outside the academy is part,...

It seems to me that there are two kinds of numbers: the kind whose concept we can grasp by imagining a case that instantiates it, and the kind we cannot imagine. For example, we can grasp the concept of 1 by imagining one object. The same goes for 2, 3, 0.5 or 0, and pretty much all the most common numbers. But there is this second kind that we cannot imagine. For example, i (the square root of -1) or 532,740,029. It seems to me that nobody can really imagine what 532,740,029 objects or i object(s) are like (you see, I don't even know whether I should put 'object' or 'objects' here, because I don't know whether i is singular or plural; I don't know what i is). So, Q1) if I cannot imagine a case that instantiates concepts like '532,740,029', do I really know the concept, and if so, how do I know it? Q2) is there a fundamental difference between numbers whose instances I can imagine and those I cannot? (I lean towards there being no difference, but I don't know how to account...

I'd suggest that while there may be differences in how easy it is for us to "picture" or "imagine" different numbers, this isn't a difference in the numbers themselves; it's a rather variable fact about us. I can mentally picture 5 things with no trouble. If I try for ten, it's harder (I have to think of five pairs of things). If I try for 100, it's pretty hopeless, though you might be better at it than I am. But I'm pretty sure that there's no interesting mathematical difference behind that. I'm also pretty sure that I understand the number 100 quite well. I don't need to be able to imagine 100 things to see that 2 x 2 x 5 x 5 is the prime factorization of 100, for example, nor to see that 100 is a perfect square. But that may still be misleading. I have no idea offhand whether 532,740,029 is prime. But I know what it would mean for it to be prime -- or not prime. And in fact, a bit of googling for the right calculators tells me that 532,740,029 = 43 x 1621 x 7643. I can't verify that by doing the...
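
If you'd rather not take the online calculators on trust, a short Python sketch (simple trial division; the function name factorize is my own) both recovers and checks that factorization:

    # Factor n by trial division: slow for huge numbers, but instant
    # for a nine-digit number like 532,740,029.
    def factorize(n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:   # divide out each prime factor fully
                factors.append(d)
                n //= d
            d += 1
        if n > 1:               # whatever remains is itself prime
            factors.append(n)
        return factors

    print(factorize(532_740_029))   # [43, 1621, 7643]
    print(43 * 1621 * 7643)         # 532740029 -- the check run in reverse

The point stands either way: grasping what "prime" means, and how to check it, doesn't require picturing half a billion objects.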

Is it morally acceptable to hate a crime but not the criminal?

I'm having a bit of trouble understanding why it wouldn't be. Among possible reasons why it would be just fine, here are a few.

1) People, and more generally sentient beings, occupy a very different place in the moral universe than mere things (including events and abstract ideas). Moral notions don't even get a grip unless they refer back one way or another to beings as opposed to things. There's simply no reason to think that our attitudes toward people should be in lock-step with our attitudes toward non-sentient things.

2) Moreover, you might think that hating people is almost always not a good thing. It makes it harder to see their humanity, it makes you more likely to treat them less fairly, and it fills you up with emotional bile. Hating a crime might not be emotionally healthy either, but given the distinction you're interested in, it's not personal; it's strong moral disapproval of a certain kind of action, and that might be both appropriate and productive.

3) Suppose someone you care...

When I read most discussions about free will, it seems that there is an implicit, unspoken assumption that might not be accurate once it is brought forward and addressed explicitly. We know from research (and for me, from some personal experiences) that we make decisions before we are consciously aware that we have made them. The discussions about free will all seem to assume that one of the necessary conditions of free will is that we be aware that we are exercising it in order to have it (sorry if I did not phrase that very well). In other words, if we are not consciously aware that we are exercising free will in the moment that we are making a decision, then it is assumed that we do not have free will, merely because of that absence of conscious awareness. Suppose we do have free will, and we exercise it without being consciously aware that we are doing so at that particular moment. That might merely be an artifact that either we are using our awareness to do something that requires...

Part of the problem with this debate is that it's not always clear what's really at issue. Take the experiments in which subjects are asked to "freely" choose when to push a button and we discover that the movement began before the subject was aware of any urge to act. The conclusion is supposed to be that the movement was not in response to a conscious act of willing and so wasn't an act of free will. But the proper response seems to be "Who cares?" What's behind our worries about free will has more or less nothing to do with the situation of the subjects in Libet's experiment. Think about someone who's trying to make up their mind about something serious—maybe whether to take a job or go to grad school. Suppose it's clear that the person is appropriately sensitive to reasons, able to reconsider in the light of relevant evidence and so on. There may not even be any clear moment we can point to and say that's when the decision was actually made. I'd guess that if most of us thought about it, we'd...
