
Mind

Suppose a computer is trying to execute some code or another, but hasn't done so yet (for example, it is waiting for a given signal, or for a certain period of time to elapse). Does the computer intend to execute that code? Can we speak of intention in a case like this?
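[For concreteness: the situation the question describes might look like the following Python sketch, where the 5-second delay and the run_task function are illustrative assumptions, not anything from a particular system.]

    import threading

    def run_task():
        # The deferred action: at scheduling time, none of this has run yet.
        print("task executed")

    # Schedule run_task to fire after a 5-second delay. Between start()
    # and the timer firing, the program is in exactly the state the
    # question describes: the code is in place and a trigger is pending,
    # but nothing has yet been executed.
    timer = threading.Timer(5.0, run_task)
    timer.start()
    timer.join()  # block until the timer fires and run_task completes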
Accepted:
July 26, 2012

Comments

Louise Antony
August 2, 2012

You may not realize it, but you have presupposed the answer to your question in the way you asked it. You speak of the computer "trying" to execute code. Trying involves intending to do something. So if you are not speaking metaphorically, you are presupposing that computers can have intentions, and that the computer in your case already has one. If the computer can really be said to be trying, then the additional detail in your example (viz., that there's a temporal gap between the computer's beginning to try and the execution of the intended act) doesn't matter.

Now maybe you meant to be using the term "trying" loosely, or metaphorically, and then your question was whether the term "intention" could be strictly and literally applied to a computer. That's a good question. The answer, however, is not going to depend on whether there's a temporal gap between the trying and the successful execution. You can see that if you consider some non-controversial cases of something's intending to do something. So suppose that I intend to type the letter "x". There's probably very little time between the formation of my intention and the initiation of the motor routine. (Note -- some neuroscientists and some philosophers think that there's empirical evidence that the initiation of at least some motor actions precedes the formation of the intention. It certainly seems that our awareness of the formation of an intention can come after the action has been initiated. But the matter is controversial.) In other cases, as for example when I form the intention to write a philosophy paper, there can be an extremely long gap between my forming the intention and my executing it. So timing is not the important factor.

What is important? First, what it is for something to have an intention; second, what it takes for something to meet those requirements.

I think that an intention is the product of a desire for something and a belief about the means necessary to obtain it. So to have an intention, one must at least be the kind of thing that has beliefs and desires. That's a pretty neutral claim. Most contemporary philosophers will agree with it. Some philosophers will add that a thing also needs to be capable of action in order to have an intention. That's a little more controversial, depending on what's meant by "action". If mental actions count (like doing sums in one's head, or recalling the words to a song), then it's also pretty uncontroversial. But let's focus on the first necessary condition: beliefs and desires.
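To make that analysis concrete, here is a toy Python sketch (the names, like form_intention, are my own hypothetical illustrations, not a standard formalism): an intention is produced from a desire for an end together with a belief about the means to that end.

    from dataclasses import dataclass

    @dataclass
    class Desire:
        end: str    # what is wanted, e.g. "a finished paper"

    @dataclass
    class Belief:
        means: str  # the means believed necessary to obtain the end
        end: str    # the end those means are believed to secure

    def form_intention(desire, belief):
        # An intention as the product of a desire for something and a
        # belief about the means necessary to obtain it.
        if belief.end == desire.end:
            return belief.means   # the intended action
        return None               # no matching means-end belief, no intention

    print(form_intention(Desire("a finished paper"),
                         Belief("write a draft", "a finished paper")))
    # prints: write a draft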

So: what does it take for something to have beliefs and desires? Here, you'll get different answers from different philosophers. But here's mine: I am a computationalist about the mind. I think that beliefs and desires are certain kinds of functional states, involving relations to representations. So I see no reason why a computer could not, in principle, have beliefs and desires. But there's another requirement for something to have a mind, and that's that the representations have to have genuine meaning (confusingly, philosophers use the term "intentionality" to mean "genuine meaning" as well as to mean "being in a state related to intentions"). Currently existing computers operate with merely formal representations -- any meaning the representations have is meaning that we, the designers and users, choose to impute to them.
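To illustrate what "merely formal representations" means in practice, here is a toy Python sketch (the belief store and its token names are hypothetical): the program stores and retrieves a token flawlessly, yet the token means something only because we read it that way.

    # A toy "belief store": the machine manipulates uninterpreted tokens.
    beliefs = set()

    def add_belief(token):
        beliefs.add(token)

    def believes(token):
        return token in beliefs

    add_belief("it_is_raining")

    # The program answers correctly, but "it_is_raining" carries no
    # meaning for the machine; it is a formal symbol. Any connection to
    # actual rain is imputed by us, the designers and users.
    print(believes("it_is_raining"))   # True
    print(believes("it_is_sunny"))     # False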

Now as I said, philosophers are going to disagree about all the elements of my view. But the main thing is that most philosophers do think that being able to form intentions requires having a mind, and they think, further, that it is a real fact about the world that some things have minds and some things don't. An interesting exception is Daniel Dennett. He denies that there is any specific property or form of organization that is necessary for something to have an intention; he thinks that as long as a thing's activity can be usefully described in intentional terms -- terms like "believe," "desire," and "intend" -- that thing can be truly said to have intentions. As he puts it, for a being or a system to have mental states is for it to be fruitful for an observer to take the "intentional stance" toward that being or system. What's crucial about the pattern of activity, what makes it interpretable as intentional, is that the activity looks rational. So, for example, if you are playing chess or hearts (more my speed) with a computer, you might find yourself wondering what move the computer "is thinking about making". And the way you might think about this is by pretending that the computer knows certain things -- the rules of the game, the moves that have already been made, the moves that are open to it -- and wants certain things -- to win the game -- and then figuring out what any rational being would decide to do in those circumstances. What Dennett would say is that if you are able to sustain play this way -- if what the computer does continues to look rational in light of what you are pretending to be its beliefs and desires -- then you are not pretending. All it is for the computer to be really intending things, Dennett would say, is for its behavior to display a pattern that makes it fruitful to attribute beliefs and desires and intentions to it.
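As a rough illustration of taking the intentional stance (a Python sketch under assumptions of my own -- the move names and scores are hypothetical, and this is not Dennett's formulation): attribute to the system beliefs about its options and a desire to win, then predict the move a rational agent would choose.

    # Attributed "beliefs": each legal move mapped to how well the system
    # is taken to think that move serves its attributed desire (winning).
    attributed_beliefs = {
        "advance_pawn": 0.2,
        "capture_queen": 0.9,
        "retreat_bishop": 0.1,
    }

    def predict_move(beliefs):
        # A rational agent pursuing its desire picks the move it
        # believes best serves that desire.
        return max(beliefs, key=beliefs.get)

    print(predict_move(attributed_beliefs))  # capture_queen

    # If the system's actual play keeps matching predictions like this,
    # Dennett would say the attribution of beliefs and desires to it is
    # not mere pretense.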

So Dennett might well say that the computer in your example is intending to execute the code, regardless of whether it satisfies all those other conditions I gave. But he'd want to know more about the computer's behavior, to see whether the pattern supports our taking the intentional stance. But again -- the time between intention formation and execution is not pertinent.

Dennett, by the way, thinks that nature itself is an intentional agent, because we can think of natural selection as a rational process. And some philosophers, like Deborah Tollefsen, who agree with Dennett about intentionality in general, think that groups of agents, things like the Supreme Court, can literally have intentions. I think that, whether or not one wants to use the term "intention" the way Dennett recommends, there's still going to be a difference between the kinds of beings that satisfy the conditions I sketched and those that don't, and that the difference is important for lots of reasons.

But still -- nothing depends on time!
