When we write "one and a half" in decimal notation as 1.5, do we really mean 1.5000... (with an infinity of zeros)? If so, how do we know there's no "3" lurking at the 10 millionth decimal place? Is this a problem of converting an analogue world into digital mathematics?
Yes, "one and a half" means 1.5000..., with infinitely many zeros. How do we know there's no "3" lurking at the 10 millionth decimal place? Well, it depends on how the number is being specified. If you are the one specifying the number, and you say that the number you have in mind is one and a half (exactly), or you say that it is 1.5000... and you add (as you did) that the decimal expansion you have in mind is the one in which the zeros go on forever, then you've already answered your own question--you have specified that there's no 3 lurking at the 10 millionth decimal place. If, on the other hand, the number was the result of a physical measurement, then there will surely be some amount of error in the measurement. So if the readout on the measuring instrument says "1.5000", then most likely the quantity being measured is not exactly one and a half, and there is some nonzero digit lurking somewhere--perhaps in the very next digit. If someone else tells you something about a number, and he...