
In an answer to a question about logic, Prof. Maitzen says he is unaware of any evidence that shows classical logic fails in a real-life situation. Perhaps he has never heard of an example from physics that shows how classical logic does not work in certain restricted situations? A polarizing filter causes light waves that pass through it to align in only one direction (e.g., up-down or left-right). If you have an up-down filter, and then a left-right filter behind it, no light gets through. However, if you place a filter with a 45-degree orientation between the up-down and left-right filters, some light does get through. It seems to me that classical logic cannot explain this real-world result. Thanks!

I'm sure that Stephen Maitzen will have useful things to say, but I wanted to chime in on this one. You have just given a perfectly consistent description of what actually happens in a simple polarization experiment that I use almost every semester as a teaching tool. Classical logic handles this case without breaking a sweat. But there's another point. You've described the phenomenon in terms of light waves. That's fine for many purposes, but note that the wave version of the story of this experiment comes from classical physics, where (for the most part at least) there's no hint of logical paradox. The classical explanation for the result is that a polarizing filter doesn't just respond to a property that the light possesses. It also changes the characteristics of the wave. Up-down polarized light won't pass a left-right filter, but if we put a diagonal filter between the two, the classical story is that the intermediate filter lets the diagonal component of the wave pass, and when it does, the light...
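For concreteness, here's a minimal numeric sketch of that classical story (my own illustration, not part of the answer; it assumes Malus's cos² law, the standard classical rule for polarizers): each filter passes the component of the wave along its axis and realigns the light's polarization.

```python
import math

# Sketch of the classical account, assuming Malus's law: I = I0 * cos^2(theta),
# where theta is the angle between the light's polarization and the filter's axis.
def through_filters(angles_deg):
    """Relative intensity after passing a sequence of polarizing filters,
    starting with light polarized by the first filter in the list."""
    intensity, polarization = 1.0, angles_deg[0]
    for angle in angles_deg[1:]:
        intensity *= math.cos(math.radians(angle - polarization)) ** 2
        polarization = angle  # the filter changes the wave, as described above
    return intensity

print(through_filters([0, 90]))      # ~0: up-down then left-right blocks everything
print(through_filters([0, 45, 90]))  # ~0.25: the diagonal filter lets light through
```

No contradiction anywhere: the middle filter changes the light's polarization, so the final filter sees 45-degree light rather than up-down light.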

Given a particular conclusion, we can, normally, trace it back to the very basic premises that constitute it. The entire process of reaching such a conclusion (or stripping it to its basic constituents) is based on logic (reason). So, however primitive a premise may be, we never seem to reach the "root" of a conclusion. Do you believe this goes to show that we can never acquire "pure knowledge"? That is, do you think there is a way around perceiving truths through, so to say, a prism of reasoning, in which case nothing is to be trusted?

There's a lot going on here. You begin this way: "Given a particular conclusion, we can, normally, trace it back to the very basic premises that constitute it." If by "conclusion" you mean a statement that we accept on the basis of explicit reasoning, then we can trace it back to the premises we reasoned from simply because we've supposed that there are such premises. On the other hand, most of what we believe doesn't come from explicit reasoning. (I don't reason to the conclusion that I had a burrito for lunch. I just remember what I ate.) And even when it does, the premises don't usually constitute the conclusion. The easiest way to see this is to consider non-deductive reasoning. A detective may conclude that Lefty was the culprit because a number of clues point in that direction. Maybe a witness saw someone who looks like him; maybe he had a particular motive for the crime. But the clues don't constitute Lefty's being the criminal; they merely make it likely. After all, even given all the...

Hello! I have a question about a particular line of reasoning in a debate that, to me, only leads to a "do I care" conclusion. I have now encountered this reasoning in several debates and can't think of a better conclusion. There must be a name for this that I am not aware of. Most recently this happened in a debate about cults. We were chugging along on the topic of cults and what gets something labeled a cult vs., say, a religion or a tribe or, more universally, just humanity. The conclusion, again to me, was that when you expand the definition of "cult" so far out, yes, the entire human race can be labeled a cult. That is to say that under that definition of the word "cult" everything can be labeled a cult, and the only conclusion is "do I care". This did not help my friend, who wishes to avoid all cults but seemingly proved they were in a cult called the human race. Is there a name for this type of semantic bloating? Is this perhaps a long-established logical fallacy I'm not aware of? Regards.

I don't know the name, though I like "semantic bloating." In any case, a couple of observations. First, words mean what people use them to mean. Words in English mean what competent speakers use them to mean—or, at least, that's close enough for our purposes. Competent speakers of English don't use the word "cult" to refer to the whole human race. But the issue isn't really about the word. If your friend has a point, s/he ought to be able to make it by setting the word "cult" aside. What bothers us about the things we typically label cults is that they display a cluster of undesirable traits and tendencies. They make a rigid distinction between insiders and outsiders; they enforce membership conditions that alienate members from family and friends who mean them no harm; they insist that members accept dubious beliefs; they make it psychologically distressing for people to challenge or doubt those beliefs; they expect unquestioning obedience to the group's authority figures. All of these things show up in...

Lately, I have been hearing many arguments of the form: A is better than B, therefore A should be more like B. This is despite B being considered the less desirable option (often by the one posing the argument). For example: The poor in our country have plenty of food and places to live. In other countries, the poor go hungry and have little to no shelter. It is then implied that the poor in our country should go hungry and have little to no shelter. I was thinking this was a fallacy of suppressed correlative, but that doesn't quite seem to fit. What is the error or fallacy in this form of argument? How might one refute such an argument?

Years ago, I used to teach informal reasoning. One of the things I came to realize was that my students and I were in much the same position when it came to names of fallacies: I'd get myself to memorize them during the term, but not long after, I'd forget most of the names, just as my students presumably did. Still, I think that in this case we can come up with a name that may even be helpful. Start here: the conclusion is a complete non sequitur; it doesn't even remotely follow from the premises. How do we get from "The poor in some countries are worse off than the poor in our country" to "The poor in our country should be immiserated until they are as wretched as the poor in those other countries"? Notice that the premise is a bald statement of fact, while the conclusion tells us what we ought to do about the fact. By and large, an "ought" doesn't simply follow from an "is", and so we have a classic "is/ought" fallacy. However, pointing this out isn't really enough. After all, in some cases...

Is there a way to confirm a premise's truth? When I looked it up, I found two ways suggested. The first was the idea that a premise can be common sense, which I can't square with the idea that appeals to consensus are considered a fallacy. The second was that it can be supported by inductive evidence, which to my knowledge can only be used to support claims of likelihood, not certainty.

The answer will vary with the sort of premise. For example: we confirm the truth of a mathematical claim in a very different way than we confirm the truth of a claim about the weather. Some things can be confirmed by straightforward observation (there's a computer in front of me). Some can be confirmed by calculation (for example, that 479 × 368 = 176,272). Depending on our purposes and the degree of certainty we need, some can be confirmed simply by looking things up. (That's how I know that Ludwig Wittgenstein was born in 1889.) Some call for more extensive investigation, possibly including the methods and techniques of some scientific discipline. The list goes on. It even includes things like appeal to consensus, when the consensus is of people who have relevant expertise. I'm not a climate scientist. I believe that humans are contributing to climate change because the consensus among experts is that it's true. But the word "expert" matters there. The fact that a group of my friends happen to think that...

Is this a decent argument (i.e., logical, sound)?

If God exists, God is an omniscient, omnipotent, wholly good being.
If God is wholly good, God would want humans to possess free will.
If God is wholly good, God could endow humans with free will.
But if any being is omniscient, or all-knowing, such a being would know human choices and actions before they are chosen.
Under such conditions, free will would exist only as an illusion, or in the mind as the human perception of having free will; true free will would not exist, because God or some other power has decided all human choices in advance.
Therefore, God, if God exists, cannot be both wholly good and omniscient.
Therefore, God does not exist.

When we look at arguments, we have two broad questions in mind. One is whether the conclusion follows from the premises, whether or not the premises are true. The other is whether the premises are actually true. So with that in mind, let's turn to the argument. It's often possible, by restating premises and adding other premises that are assumed but not stated, to make an argument valid even if it's not valid as stated. Your argument is more or less this, I think:

1. If God exists, then necessarily God is perfectly good, knows all, and is all-powerful.
2. Suppose God exists.
3. Since God is all-powerful, God can give us free will.
4. Since God is perfectly good, God wants us to have free will.
5. God does anything God wants to do.
6. Therefore, we have free will.
7. Since God knows all, God knows what we are going to do before we do it.
8. If God knows what we're going to do before we do it, then we don't have free will.
9. Therefore, we don't have free will. CONTRADICTION.
10. Therefore, God doesn't exist.

We could clean things...
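To see mechanically why these premises clash, here's a toy brute-force check (my own simplification into three propositional letters, not anything from the original exchange): it searches every truth assignment and finds none on which all the premises hold while God exists, which is just what the reductio claims.

```python
from itertools import product

# g = "God exists", f = "we have free will", k = "God knows in advance what we'll do"
premises = [
    lambda g, f, k: (not g) or f,        # if God exists, we have free will (goodness + power)
    lambda g, f, k: (not g) or k,        # if God exists, God has foreknowledge
    lambda g, f, k: (not k) or (not f),  # foreknowledge rules out free will
]

# Is there any assignment where all premises hold and God exists?
witnesses = [row for row in product([True, False], repeat=3)
             if all(p(*row) for p in premises) and row[0]]
print(witnesses)  # [] -- none, so the premises jointly entail "God doesn't exist"
```

This only addresses the first of the two broad questions (whether the conclusion follows); whether the premises are actually true is a separate matter.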

This is a question about pure logic. There are two theories: Theory A and Theory B. Theory A assumes AssumptionA; Theory B assumes AssumptionB. The two assumptions are mutually exclusive: if AssumptionA, then not AssumptionB, and vice versa. I believe that a philosophical result is that Theory A and Theory B cannot prove anything about each other. All you can do is preface each result with the assumption. For example, if Theory A proves X and Theory B proves Y, then we can say "If AssumptionA, then X" and "If AssumptionB, then Y". Who first proved this? Where is it documented? Eugene

I'm going to step through this carefully to make sure I follow. We have two theories: A and B. Theory A has an assumption, call it A; theory B has an assumption, call it B. And A and B are mutually exclusive: they can't both be true. Let's pause. To say that a theory has an assumption means that if the theory is true, then the assumption is true. It doesn't mean that if the assumption is true, then the theory is true. A silly example: the special theory of relativity assumes that objects can move in space. But from the assumption that objects can move in space, the special theory of relativity doesn't follow; you need a lot more than that. Otherwise, the "assumption" would be the real theory. You ask if it's true that neither theory can prove anything about the other. If I understand the question aright, it's not true. For one thing, trivially, if we take A as a premise, then by your own description, it follows that B is false. That seems like a case of proving something about B...

My one distinguishing feature is that I don't have a distinguishing feature - paradox?

Fun question. Let's say that a characteristic or property or whatnot is intrinsic if we can tell whether someone has it without needing information about other people/things. The fact that I have blue eyes is an intrinsic feature in that sense. My eye color doesn't depend on your eye color. But to know that I'm the shortest person in the room, you have to know things about the other people in the room as well as things about me (namely, our heights). Being the shortest person in the room isn't an intrinsic property/quality/characteristic. Note that we're using "property", "characteristic", and "quality" so as to include abstract things, and things that depend in possibly quite recondite ways on how an individual is related to other individuals, sets of individuals... We don't tend to use the word "feature" so abstractly. Your features are the things we'd talk about to describe you yourself. Some of them, like height, may not be purely intrinsic, but to make things simple, we'll set those aside. If we...

I am reading "The Philosopher's Toolkit" by Baggini and Fosl, and in section 1.12 is the following: "As it turns out, all valid arguments can be restated as tautologies - that is, hypothetical statements in which the antecedent is the conjunction of the premises and the conclusion." My understanding is that the truth table for a tautology must yield a value of true for ALL combinations of true and false of its variables. I don't understand how all valid arguments can be stated as tautologies. The requirement for validity is that the conclusion MUST be true when all the premises are true. I must be missing something. Thanx - Charlie

I don't have Baggini and Fosl's book handy, but if your quote is accurate, there's clearly a mistake—almost certainly a typo or proof-reading error. The tautology that goes with a valid argument is the hypothetical whose antecedent is the conjunction of the premises and whose consequent is the conclusion. Thus, if "P, Q, therefore R" is valid, then (P & Q) → R is a tautology, or better, a truth of logic. So if the text reads as you say, good catch! You found an error. However, your question suggests that you're puzzled about how a valid argument could be stated as a tautology at all. So think about our example. Since we've assumed that the argument is valid, we've assumed that there's no row where the premises 'P' and 'Q' are true and the conclusion 'R' is false. That means: in every row, either 'P & Q' is false or 'R' is true. (We've ruled out rows where 'P & Q' is true and 'R' is false.) So the conditional '(P & Q) → R' is true in every row, and hence is a truth of logic.
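To make the row-by-row reasoning concrete, here's a small sketch (my own illustration; the concrete argument, modus ponens, is my choice of example): it enumerates the truth table and confirms both that the argument is valid and that the corresponding conditional is a tautology.

```python
from itertools import product

def valid(premises, conclusion, n):
    # Valid: no row where all premises are true and the conclusion is false.
    return all(not (all(p(*row) for p in premises) and not conclusion(*row))
               for row in product([True, False], repeat=n))

def tautology(formula, n):
    # Tautology: true in every row of the truth table.
    return all(formula(*row) for row in product([True, False], repeat=n))

# Modus ponens: premises P and P -> R, conclusion R.
premises = [lambda p, r: p, lambda p, r: (not p) or r]
conclusion = lambda p, r: r
conditional = lambda p, r: (not (p and ((not p) or r))) or r  # (P & (P -> R)) -> R

print(valid(premises, conclusion, 2), tautology(conditional, 2))  # True True
```

The two checks succeed or fail together, which is exactly the point of the corrected quotation.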

What is the difference between "either A is true or A is false" and "either A is true or ~A is true?" I have an intuitive sense that they are two very different statements but I am having a hard time putting why they are different into words. Thank you.

I think you're getting at the difference between the principle of Bivalence (there are only two truth values—true and false) and the Law of Excluded Middle: 'P or not-P' is always true. Suppose there are some sentences that are neither true nor false. That might be because they are vague, for example. It might not be true to say that Smith is bald, but it might not be false either; it might be indeterminate. So if S stands for "Smith is bald," then "Either S is true or S is false" would not be correct. Our assumption is that S isn't true, but also isn't false. However, if by "not-S" we mean "S isn't true," then "S or not-S" is true. That is, bivalence would fail, but excluded middle wouldn't. But as you might imagine, there's a good deal of argument about the right thing to say here.
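One way to see the two principles come apart is a toy three-valued model (a sketch built on my own choice of representation; the reading of negation follows the gloss of "not-S" as "S isn't true"): bivalence fails for an indeterminate sentence while excluded middle still holds.

```python
# Three values instead of two: a sketch, not a full many-valued logic.
T, F, I = "true", "false", "indeterminate"

def not_true(s):
    # Read "not-S" as "S isn't true" (the gloss used above).
    return T if s != T else F

def disj(a, b):
    # A disjunction is true as soon as either disjunct is true.
    if T in (a, b):
        return T
    return F if (a, b) == (F, F) else I

S = I  # "Smith is bald" is neither true nor false

print(S in (T, F))           # False -- bivalence fails for S
print(disj(S, not_true(S)))  # true  -- "S or not-S" still comes out true
```

With a different negation, one that maps indeterminate to indeterminate, excluded middle would fail as well, which is part of why there's so much argument about the right thing to say.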
