Recent Responses

Does a stereotype need to be largely false to be objectionable? Many people seem to think so, as when they respond to criticism of stereotypes by replying, "Some stereotypes exist for a reason."

"Largely false" is an interesting phrase -- and there are several different things one might mean by a stereotype, and it's being "true" or "somewhat/largely" true ... plus there are different sorts of "offenses" one may commit when using stereotypes -- but to be brief: Let's assume some stereotype is largely true, i.e. true of many/most of the members of the relevant category. One might still proceed objectionably when using that stereotype simply for assuming that what's true of many/most is in fact true of all. Indeed, we sometimes say that fail to treat an individual with appropriate respect when you simply classify that individual as a member of some category and are disinterested in the particular details that might characterize that individual. So even if the stereotype is true of that individual, it may still be wrong to ASSUME it is true of that individual; and all the more so if it turns out the stereotype is not true of that individual. So a short answer to your excellent question is no: even "largely true" stereotypes might be objectionable.

Now there are all sorts of ways to start qualifying this -- but I'll leave it at that.

hope that helps...
Andrew

What is the difference between a marital relationship and a committed relationship in all aspects except the legal bond? Is there really a difference?

The difference is exactly that marriage is a legal bond, and it involves certain obligations and requirements (for example, those having to do with property) that may not be implied by the "committed relationship". It is, as a result, a more serious affair. There is also the historically related fact that marriage is often taken to have a religious dimension, which a committed relationship may or may not have. What some people dislike about marriage is that in the past it has existed in a hierarchical setting, so that a priest or other official, at a particular moment, says the words, 'I pronounce you man and wife.' It may be that in a particular committed relationship there is such a moment, but it may also not be the case.

Is there a way to confirm a premise's truth? When I looked it up, I found two ways suggested. The first was the idea that a premise can be common sense, which I can't separate from the idea that appeals to consensus are considered a fallacy. The second was that it can be supported by inductive evidence, which to my knowledge can only be used to support claims of likelihood, not certainty.

The answer will vary with the sort of premise. For example: we confirm the truth of a mathematical claim in a very different way than we confirm the truth of a claim about the weather. Some things can be confirmed by straightforward observation (there's a computer in front of me). Some can be confirmed by calculation (for example, that 479x368=176,272). Depending on our purposes and the degree of certainty we need, some can be confirmed simply by looking things up. (That's how I know that Ludwig Wittgenstein was born in 1889.) Some call for more extensive investigation, possibly including the methods and techniques of some scientific discipline. The list goes on. It even includes things like appeal to consensus, when the consensus is of people who have relevant expertise. I'm not a climate scientist. I believe that humans are contributing to climate change because the consensus among experts is that it's true. But the word "expert" matters there. The fact that a group of my friends happen to think that something is true may not give me much reason at all to believe as they do.

We may want to pause over the word "confirm." If by "confirm," you mean "establish with certainty," we usually can't do that. If something isn't just a matter of meaning, math or logic, there's room to be wrong no matter how careful we are. Still, in many cases, there's not much room. Is it possible in some abstract sense of "possible" that Obama wasn't President in 2010? Yes. Is there room for a reasonable person to think he wasn't? Hard to see how.

This point bears on your question about "induction." Outside of math, logic, and meaning, what we know we know by experience---direct or indirect, ours or someone else's. In those cases, there's always room for doubt, and what we believe is more or less likely. There's no way around that; it's almost always possible that one or more of the premises of our arguments might be false. That's the price we pay for having knowledge about the world itself and not just, to use Hume's phrase, relations among ideas.

Summing up: there are lots of ways to confirm things, but which way is best depends on what we're trying to confirm. In most cases, "confirm" doesn't amount to "become certain." There are fallacious ways to argue for a premise, but reasonable ways of confirming one's beliefs---consulting experts, for example---may be superficially like fallacious ways (asking a casual sample of my friends, for instance, when the subject is one they have no special knowledge about). There's no simple rulebook for knowledge, even though there's a great deal that we actually know.

Can we perceive the natural laws, which have shaped our ability to perceive?

I'm not sure I would use quite the verb "perceive" to describe our cognitive grasp of natural laws, but I don't see any reason why we can't discover at least some natural laws, including those that have shaped our ability to perceive (or discover). That is, I don't see any reason why a natural law's having shaped our ability to perceive should make that natural law especially hard for us to discover. It's not as if we should think of natural laws as having purposely shaped our ability to perceive in order to keep themselves hidden from us.

Skeptical theism states that if we cannot tell whether any of the evils in our world are gratuitous, then we cannot appeal to the existence of gratuitous evil to conclude that God does not exist. However, I can't help but think that we can. The rules of probability tell us that individual probabilities can be quite low, but their disjunction can be very high. For instance, there may be only a small chance that you will be involved in an automobile accident on a given day, but if you drive every day, the chances are pretty good that you will be in one on some day in your lifetime. Similarly, even if the chance that a given instance of a trillion cases of suffering is gratuitous is quite low, the chance that one of that trillion is gratuitous can be very high, and it only takes one instance of gratuitous evil to rule out the existence of God. Coming from someone who is not a philosophy major, am I right in my criticism of skeptical theism, or is it too naive?

The theism part of skeptical theism, at least if it's classical theism, must say that the probability that God allows suffering without having an adequate moral justification for allowing it is well-defined and zero, just as you suspect.

But the skeptical part of skeptical theism, as I understand it, says that we can't properly assign any probability at all to the claim that a given case of suffering is in fact gratuitous (i.e., such that God, if God exists, has no adequate moral justification for allowing it). We can't, according to the skeptical part, because we can't presume to know the full range of justifications at God's disposal, if God exists. So we have to enter a "?" rather than a number (or range of numbers) into our calculation of the probability of the disjunction, which of course renders the calculation impossible.
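As an aside on the arithmetic behind the question: under the usual rules, the probability that at least one of n independent cases turns out a certain way is 1 - (1 - p)^n, which climbs toward 1 as n grows even when the per-case probability p is tiny. The little sketch below (my own illustration, in Python, with made-up numbers for p and n) shows what the questioner's calculation would look like if a definite p were available -- the skeptical part's point is precisely that it isn't.

# A back-of-the-envelope illustration (assumes independence and a definite
# per-case probability p, which is exactly what skeptical theism disputes).

def prob_at_least_one(p: float, n: int) -> float:
    """P(at least one of n independent events occurs) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# The driving analogy: a small daily risk adds up over a lifetime of driving.
print(prob_at_least_one(1e-4, 25_000))   # ~0.92 for a hypothetical 1-in-10,000 daily risk

# The questioner's case: a trillion instances of suffering, each with a tiny
# (hypothetical) chance of being gratuitous.
print(prob_at_least_one(1e-11, 10**12))  # ~0.99995

# With a "?" in place of p, as the skeptical theist insists, the calculation
# cannot even get started.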

I don't mean to suggest that I accept the skeptical part of skeptical theism, but that's what it says, if I understand it correctly.

Where does one draw the line between honoring the work of an earlier writer/scholar/artist and plagiarism or fraudulent re-use?

Surely intent to deceive has something to do with this. If I set out to use X's ideas in order to solve a problem, and I make it clear that this is what I am doing, then that is honouring. If I don't make it clear, and I couldn't reasonably expect all my readers to know this is what I am doing, then that is plagiarism.

A slightly different version of your question would be this: "Where does one draw the line between honouring ... and merely rehashing old ideas?" (I love the word 'rehash', by the way, it being literally visceral.)

We are probably all tempted by the answer: "a work of philosophy (or art, or whatever) is not a rehash if it exhibits some amount of originality." So, suppose I use X's ideas (and I'm clear about what I am doing) to try to solve a problem that X did not consider, or to write a novel about a kind of situation that X did not. That is surely a sufficient degree of originality to avoid the accusation of rehashing.

But originality is not so easy a concept to define. As an educator, I find that undergraduate students often try to produce original work and end up inadvertently ... making a hash of things. Walking before running and all that. And yet, if the same student merely repeated back to me what some philosopher had said, that would also not be a good result. In this case, a sufficient level of originality is showing that one is able to understand: putting the ideas into different words, explaining them very clearly, using different examples, and so forth.

Such a level of originality might earn a very good mark on an undergraduate essay, but it wouldn't get one published in a journal. So, again, originality functions differently.

A nice meditation on the concept of originality is the short story "Pierre Menard, Author of the Quixote" by J. L. Borges. Look it up!

I really want to do a PhD in philosophy and teach, but society says I should not. I am 19, but I have to go back to high school to finish up. A long way to go. How do I motivate myself? How do I ignore my other, unimportant desires/distractions to become what I want and what is most meaningful to me?

I think it is great that you know what you want at the age of 19 - I certainly didn't. If the goal of achieving higher qualifications in philosophy is a genuine goal for you, then it will stay with you for the next ten years or so, by which time with a bit of luck you will have arrived at your destination. If it turns out to be a genuine goal for you, then at worst what you call distractions will just delay you a bit. In fact, the philosopher who lived his or her life without distractions is just a myth, and certainly not a standard against which the rest of us should be judged.

"Life is short" is often said but, in fact, for most people life is quite long, at least in the sense of presenting more opportunities than one imagines.

Are there any philosophers who argue that novel experiences in themselves are good things, or do philosophers generally class some experiences as good and others as bad?

This is a great question that invites a long, thorough answer, but alas I'll be brief. It's easy to recognize that things, events, and experiences have many different properties, and rather than try to evaluate the whole package and say that "x is a good thing," we can evaluate x along its many different aspects, properties, etc. So we could say that, in general, novelty (of experiences) is a good thing (for whatever reasons), while recognizing that not all novel experiences are "good things" overall -- after all, being tortured may be novel, but few except masochists would say their new experience of being tortured is a good thing. Perhaps insofar as it is novel, it is good (because it's good to learn new things, have new experiences, etc.); but insofar as it is terribly painful, it is bad; and in this example, since the badness of the painfulness outweighs the goodness of the novelty, the experience overall is bad -- even if novelty is, in general, a good thing ...

hope that's useful!

Andrew

Any comment on the fact that the expression "begs the question" is now used regularly in the U.S. media to mean "needs to be asked" rather than its original meaning, "assumes the conclusion in the argument"? Should philosophers develop a new expression to capture the original meaning? Thanks.

One of my biggest pet peeves -- it drives me crazy! I don't know how feasible it is to develop new expressions, etc., but we might consider this: when speaking to philosophers we can use the original Latin term for the fallacy, petitio principii, and when speaking to the general public, use the term the way it's widely used. (When in Rome, speak as the Romans ....) This is painful to do for most philosophers, I imagine, but just slightly less painful than using the term properly and then either being widely misunderstood or being taken by others to sound arrogant or like an idiot .....

the worrisome thing is that so many who misuse the term in public discourse are educated opinion-shapers, including journalists and politicians -- who (one would hope) might have taken some philosophy in college and should know better ..... but changing that practice (I think, sadly) is probably a losing proposition.

which begs the question: how should one use the expression 'begs the question'.... :-)

hope that's useful.

Andrew

Hi, I'm a biology student who often uses biology as a framework for understanding thought. I've come to a really tough crossroads of thought. What differentiates cognitive biases from logical fallacies?

The difference between the cognitive biases and the logical fallacies is that the biases can be taken to be common, built-in tendencies to error in individual judgements, whereas the fallacies, both formal and non-formal (so-called "informal", badly named, because "informal" actually means "casual" or "unofficial" or "relaxed"), are types of argument. The point is that the biases can be said to have causes, and are hence of psychological but not logical interest, whereas the fallacies do not have causes (though the making of a fallacy on a particular occasion may), and so the reverse is true: they are of logical but not psychological interest. There is more to be said, of course, because a psychologist might take an interest in the fallacies.
