
Probability

As I understand it, inductive reasoning is considered by most to be a posteriori; yet I learned about induction in a statistics class in much the same way one would learn a clearly a priori mathematical theory. Assuming one accepts some conclusions based on induction, are those conclusions a priori or a posteriori? John
Accepted:
July 21, 2007

Comments

Thomas Pogge
July 22, 2007 (changed July 22, 2007)

You should distinguish here between the inductive method of extrapolating from observed cases to as yet unobserved cases, on the one hand, and particular extrapolations derived by using this method, on the other hand.

Particular extrapolations are a posteriori. They depend on what has actually been observed.

The method, however, has certain a priori elements, especially in the very “clean” and somewhat artificial stories you will have encountered in your statistics class. One such story might be this. You are faced with a large urn that you know contains many marbles, all of which you know to be either white or red. On n occasions one marble was randomly selected from the urn, its color was recorded, and it was then mixed back in. Of these randomly selected marbles, 70 percent were white and 30 percent red. At the end of the story, you are then asked what we can learn from the random drawings about the color composition of the marbles in the urn.

In this sort of story, one can calculate precisely, given the result of the drawings, the probability of various color compositions in the urn. The probabilities will peak near the ratio observed in the drawings and will concentrate in predictable ways as the number of drawings increases. (If the 7:3 ratio holds up over 1000 drawings, for instance, the probability that the real ratio is under 6:4 becomes quite small in a way that can be calculated precisely.) This is the a priori element: The rational way of adjusting one’s expectations is guided -- even determined -- by probability calculations.
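
To make the a priori element concrete, here is a minimal sketch of the calculation behind the 7:3 example. It assumes a uniform prior over the true proportion of white marbles (an assumption not in the original story) and uses Python with the scipy library; the point is only that, once the story is set up this cleanly, the numbers follow by calculation rather than by judgment.

    # Sketch: 700 white and 300 red marbles observed in 1000 independent
    # drawings with replacement. Under a uniform prior on the true proportion
    # p of white marbles, the posterior for p is Beta(701, 301).
    from scipy.stats import beta

    white, red = 700, 300                 # the 7:3 ratio held up over 1000 drawings
    posterior = beta(white + 1, red + 1)  # Beta posterior under a uniform prior

    # Probability that the real ratio is under 6:4, i.e. that fewer than
    # 60 percent of the marbles are white despite the 7:3 sample.
    print(posterior.cdf(0.6))             # prints a vanishingly small probability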

In the real world, however, induction is rarely so neat. Here we need to decide which predicates are useful for extrapolation, worry about observations not being independent of one another, guard against experimenter effects and biased (theory-guided) observations, and so on.

Consider, for instance, the task of designing and fine-tuning an algorithm for accepting or rejecting mortgage applications on the basis of past repayment experience. There are indefinitely many ways of collecting and coding information about applicants. The information provided may be influenced by the conduct of the bank staff, and its coding by the bank staff’s cognitive and other biases. There isn’t just one rational way of coping with all these complexities, though some banks clearly come up with more successful algorithms than others. (Even such ex post assessments of banks are not unproblematic, however, in that only acceptance errors, not rejection errors, will come to light. We will never know whether the Smiths, who were denied a mortgage, would have met their debt service obligations had they received one.)

There are a priori elements, to be sure: We can know in advance that certain features will strengthen, or weaken, a method. (For instance, a good method should work so that the more disproportionate the default rate of applicants with a certain characteristic, the more weight that characteristic is given as a reason to deny an application.) But much else will depend on more or less lucky guesswork and imprecise “good judgment.”
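
To make that weighting idea concrete, here is a toy sketch in Python. All of the characteristics, default rates, and the threshold are made up for illustration; no bank’s actual algorithm is being described. The sketch simply gives a characteristic more weight toward denial the more its historical default rate exceeds the overall rate.

    # Toy scoring rule (hypothetical numbers throughout): weight each adverse
    # characteristic by the log of how disproportionate its default rate is
    # relative to the overall default rate, and deny above a threshold.
    import math

    overall_default_rate = 0.05
    default_rate_by_characteristic = {     # hypothetical historical rates
        "irregular_income": 0.15,
        "prior_delinquency": 0.20,
        "short_employment_history": 0.08,
    }

    def denial_score(characteristics):
        # The more a characteristic's default rate exceeds the overall rate,
        # the larger its contribution to the score.
        return sum(
            math.log(default_rate_by_characteristic[c] / overall_default_rate)
            for c in characteristics
        )

    applicant = ["irregular_income", "short_employment_history"]
    deny = denial_score(applicant) > 1.0   # the threshold itself is a judgment call
    print(deny)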

Source URL: https://askphilosophers.org/question/1725
© 2005-2025 AskPhilosophers.org