
Mathematics

One of the obvious ways computers are limited is in their representation of numbers. Since computers represent numbers as bit strings of finite length, they can only represent finitely many numbers, and each only to a finite degree of precision. Is it a mistake to think that humans, unlike computers, can represent infinitely many numbers with arbitrary precision? We obviously talk about things like the set of all real numbers; and we make use of symbols, like the letter pi, which purport to represent certain irrational numbers exactly. But I'm not sure whether things like this really show that we can represent numbers in a way that is fundamentally beyond computers.
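
For concreteness, here is a minimal Python sketch of the finiteness in question, assuming IEEE 754 double-precision floats (as in CPython); the particular values printed are illustrative:

    import math
    import sys

    # A double carries 53 bits of significand: roughly 15-17 decimal digits.
    print(sys.float_info.mant_dig)   # 53
    print(sys.float_info.max)        # the largest representable double

    # math.pi is not pi, only the nearest 53-bit approximation to it.
    print(math.pi)                   # 3.141592653589793

    # Finite precision shows up immediately in ordinary arithmetic:
    print(0.1 + 0.2 == 0.3)          # False
    print(0.1 + 0.2)                 # 0.30000000000000004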
Accepted:
April 25, 2014

Comments

Andrew Pessin
May 16, 2014

This one is basically above my pay grade, but I'll take a stab. I share your doubt that humans "can represent infinitely many numbers with arbitrary precision" in any way beyond what we find with computers. After all, our own hardware (our brain) is finite in the same ways and senses as a computer, so if sheer finitude sets the limits of representation, it's hard to see why we would differ from computers.

If, on the other hand, you're imagining this as an argument for dualism -- i.e., that our minds are distinct from our brains because they have an infinite capacity that our brains lack -- then you would first have to prove the infinite capacity of our minds. Simply writing or thinking "pi" isn't enough; the fact that "pi" represents something infinitely expandable doesn't make the symbol "pi" infinite. The clearest proof would be if we could grasp (say) the complete infinite expansion of pi in one mental glance -- but we can't. At best we can grasp THAT the expansion goes on to infinity, just as we can grasp THAT the natural numbers go on to infinity.

That is at least one important sense in which we have a concept of the infinite, in which our minds represent the infinite -- and while philosophers such as Descartes and Malebranche invoked something like it in their arguments for dualism, it doesn't strike me as very convincing. Realizing "I can always add 1" just isn't a thought that is interestingly infinite in content; it merely refers to the infinite without fully capturing it. And as Aristotle suggested, we must distinguish the "potentially infinite" from the "actually infinite": when we grasp that some sequence "goes on to infinity," we are grasping that, however long we went on, we would never complete computing the sequence -- which is something less than saying the infinite actually exists.

As far as I know, computers are just as able to represent the infinite as we are in this sense, and this sense falls short of supporting dualism. Putting it all together, I don't think we have a case for distinguishing the capacities or nature of minds vs. computers this way.
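
A concrete aside on that last point: the short Python sketch below (using Gibbons' streaming "spigot" algorithm, an established method for generating digits of pi) shows one way a computer can represent pi in exactly this potential sense -- as a rule that yields digit after digit on demand, without ever holding a completed infinite expansion:

    from itertools import islice

    def pi_digits():
        """Stream the decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                # The next digit is settled; emit it and rescale the state.
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                # Not enough information yet; consume another term of the series.
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    # Ask for as many digits as you like; the generator never "finishes" pi.
    print(list(islice(pi_digits(), 10)))   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]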
