
Could a big computer solve a philosophy problem?
Accepted:
September 6, 2008

Comments

Allen Stairs
September 11, 2008

Could a really, really smart person solve a philosophy problem? For example: could a really, really smart person "solve" the free will problem?

Some really, really smart people already have — at least to their own satisfaction. Other really, really smart people aren't convinced. Is it just that at least some of these people aren't smart enough? Or is there something else going on here?

Let's consider a different example. The four-color conjecture says that any map drawn on a plane surface can be colored, using no more than four colors, so that no two contiguous regions have the same color. This conjecture was finally proved with the aid of a computer in the 1970s. Not all mathematicians agree that we really have a proof here, since no human being could ever check it. But we might at least say this: so long as the algorithm really was properly designed and the machine worked properly, we've got something of the same general logical sort as a mathematical proof, and we could have very good empirical reasons to think that all was well with the computation. Could a computer pull off a similar feat for the free will problem?
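
(As a purely illustrative aside, the brute-force flavor of the exercise can be sketched in a few lines of Python. This is not the actual 1970s proof, which machine-checked a large inventory of special configurations; the toy map and the four_color helper below are invented for illustration.)

```python
from itertools import product

def four_color(adjacency, colors=("red", "green", "blue", "yellow")):
    """Exhaustively search for a proper coloring using at most four colors."""
    regions = list(adjacency)
    for assignment in product(colors, repeat=len(regions)):
        coloring = dict(zip(regions, assignment))
        # A coloring is proper when no region shares a color with a neighbor.
        if all(coloring[a] != coloring[b]
               for a in adjacency for b in adjacency[a]):
            return coloring
    return None

# Toy map: four mutually contiguous regions, which forces all four colors.
toy_map = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C"},
}
print(four_color(toy_map))
```

The point of the sketch is the division of labor: the search itself is mechanical and checkable, but deciding what would count as an answer was settled before the machine was switched on.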

There are some reasons to think not. One is that the worries mathematicians had about whether the computer "proof" of the four-color theorem is really a proof would come back in spades. What we look to philosophy to provide is a kind of insight or understanding that's graspable and intelligible to us. If the computer "solution" couldn't be followed, we'd be even more reluctant than the mathematicians to count it as a solution.

There's room to quarrel with that point, of course. Suppose we all agreed that the problem would be solved if a certain very complicated argument in propositional logic were valid. And suppose that, though figuring this out was well beyond any human's capacity, a big computer could manage it. Even though we couldn't absorb the details, we could have very strong empirical reasons for thinking we had the answer. And though the details would escape us, we might still grasp the nature of the claim, and that might be good enough.
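
(To make the imagined scenario concrete, here is a minimal sketch of validity-checking by truth-table enumeration. The modus_ponens example is a trivial stand-in for the "very complicated argument" the scenario imagines.)

```python
from itertools import product

def is_valid(formula, variables):
    """A propositional formula is valid iff true under every assignment."""
    return all(formula(**dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

# Toy stand-in: ((p -> q) and p) -> q, i.e. modus ponens as a tautology.
# "x -> y" is rendered as "not x or y".
modus_ponens = lambda p, q: not ((not p or q) and p) or q

print(is_valid(modus_ponens, ["p", "q"]))  # True: valid in all 4 rows

# The catch: n atomic sentences mean 2**n rows to check, so an argument
# could in principle be machine-checkable yet far beyond checking by hand.
```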

Except that this may overlook a characteristic feature of philosophical disagreement: things are controversial more or less all the way down. The free will problem is a good example. To be sure, some of the discussion raises intricate logical issues. But people on different sides of the dispute often disagree over large questions of value, intellectual and otherwise. In particular, with the free will problem, part of what determines whether you see something as a "solution" is what you care about: whether, for example, you have reason to care about the fact that in some situations and under some construals, you could or couldn't have done otherwise. The problem doesn't seem to be of the sort that brute-force computation can help us with. It's an interpretive problem -- a problem of what to make of various facts and possibilities.

There's a related difficulty that a "computational" approach to philosophical problems risks running into. We want proposed solutions to philosophical problems to be "robust" in a certain sense. A "solution" to a philosophical problem that depends very sensitively on intricate logical detail is likely to be fragile: philosophical problems typically draw on clusters of concepts that are fuzzy around the edges and whose relative importance may be open to dispute. If the computer solution calls for an intricate regimentation of the logical territory, then the "solution" might end up dissolving if we think about the concepts a little differently or weight them in a subtly different way.

That's not inevitable, of course. After all, computers can sometimes help us see that solutions to certain kinds of problems are robust. Good scientific computer models are like that: monkey with the assumptions in a reasonable way, and you still end up with more or less the same result. (This, by the way, is one reason why you should be very suspicious of climate change skeptics who say "It's all just based on models.")
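
(Here is a toy sketch of that robustness test, with an invented stand-in for a real model; nothing below is actual climate science. The procedure, not the model, is the point: vary the assumptions within a reasonable band and see whether the qualitative conclusion survives.)

```python
def toy_trend(sensitivity, forcing_per_year, years=50):
    """Invented linear toy: cumulative response to steadily applied forcing."""
    response = 0.0
    for _ in range(years):
        response += sensitivity * forcing_per_year
    return response

# Monkey with each assumption by +/-25% and see if the upshot holds.
for s in (0.6, 0.8, 1.0):
    for f in (0.03, 0.04, 0.05):
        print(f"sensitivity={s:.1f}, forcing={f:.2f} -> trend={toy_trend(s, f):+.2f}")

# Every variant trends in the same direction; the conclusion doesn't hinge
# on any one delicately tuned parameter. That's the sense of "robust" at issue.
```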

So it's by no means inconceivable that computers might have an important role to play in addressing philosophical problems. This goes for our example, the free will problem. Computer modeling might allow us to see vividly that certain sorts of "deterministic" systems are capable of a kind of subtlety that makes some of our worries about free will wither away. (Daniel Dennett has said many things that are relevant to this matter.) But I find it a little hard to see how a computer could simply "solve" a philosophical problem unless a good deal of old-fashioned philosophizing had already gone into the input. And as soon as we stepped into that background, of course, the shouting could start all over again.
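
(Dennett's own favorite illustration of deterministic subtlety is Conway's Game of Life, and the flavor can be sketched in a few lines: the update rule below is fixed and exceptionless, yet it supports higher-level patterns, like the "glider", that the rule itself never mentions.)

```python
from collections import Counter

def step(live):
    """One deterministic Life update: birth on 3 neighbors, survival on 2 or 3."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a five-cell pattern that reappears, displaced, every four steps.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(5):
    print(f"generation {generation}: {sorted(cells)}")
    cells = step(cells)
```

Nothing in step mentions gliders, yet the pattern reliably crawls across the grid. Whether that kind of emergent regularity eases the free will worry is, of course, exactly the sort of question the computer itself can't settle.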
