What is so wrong with thinking of real numbers as infinite decimals?


18

Timothy Gowers asks What is so wrong with thinking of real numbers as infinite decimals?




One of the early objectives of almost any university mathematics course is to teach people to stop thinking of the real numbers as infinite decimals and to regard them instead as elements of the unique complete ordered field, which can be shown to exist by means of Dedekind cuts, or Cauchy sequences of rationals. I would like to argue here that there is nothing wrong with thinking of them as infinite decimals: indeed, many of the traditional arguments of analysis become more intuitive when one does, even if they are less neat. Neatness is of course a great advantage, and I do not wish to suggest that universities should change the way they teach the real numbers. However, it is good to see how the conventional treatment is connected to, and grows out of, more `naive' ideas.




and gives a short construction of the real numbers as infinite decimals, then uses it to demonstrate the existence of square roots and the intermediate value theorem (a digit-by-digit computation in this spirit is sketched below).



What are other reasons for or against thinking of real numbers as infinite decimals?
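
To make the digit-by-digit construction concrete, here is a minimal Python sketch (my illustration, not Gowers's; the function name is arbitrary) that generates the decimal expansion of $\sqrt{2}$ by greedily choosing, at each place, the largest digit whose resulting truncation still squares to at most $2$:

    from fractions import Fraction

    def sqrt2_digits(n):
        """First n decimal digits of sqrt(2) after the point, computed by
        greedily extending the truncation while its square stays <= 2."""
        approx = Fraction(1)  # integer part of sqrt(2)
        digits = []
        for k in range(1, n + 1):
            step = Fraction(1, 10 ** k)
            d = 0
            while (approx + (d + 1) * step) ** 2 <= 2:
                d += 1
            approx += d * step
            digits.append(d)
        return digits

    print(sqrt2_digits(10))  # [4, 1, 4, 2, 1, 3, 5, 6, 2, 3]

Each truncation is a rational lower bound for $\sqrt{2}$, and the whole digit sequence is the "infinite decimal" that the construction takes as the number itself; this is the style of argument the linked post uses to prove that square roots exist.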










real-numbers






edited Dec 27 '16 at 20:34 · community wiki · 3 revs, 3 users 44% · TripleA









  • 37




    I can think of lots of university mathematics courses that don't have that objective...
    – Robert Israel
    Dec 26 '16 at 23:59






  • 11




    A real number is...a real number. A decimal representation is only a representation of a real number. You are quite confusing a thing with its representation.
    – Paul
    Dec 27 '16 at 0:51








  • 8




    @Paul: I disagree. A real number isn't...a real number until you define what you mean by "real number". You can define real numbers as Dedekind cuts, or as equivalence classes of Cauchy sequences, or as infinite sequences of decimal digits. It can be shown that each of these three definitions leads to a complete ordered field, and that they are therefore equivalent; but they are different definitions.
    – TonyK
    Dec 27 '16 at 2:19








  • 6




    @TonyK, one thing we need to be careful about is the two different representations of the very same real number: $$ 0.00099999999999999... $$ vs. $$ 0.00100000000000000... $$
    – robert bristow-johnson
    Dec 27 '16 at 5:25








  • 12




    -1. This question is copied (almost) verbatim, without attribution, from the introduction to this blog post by Tim Gowers: dpmms.cam.ac.uk/~wtg10/decimals.html
    – Hans Lundmark
    Dec 27 '16 at 12:34














9 Answers
9
























43














There is nothing wrong with sometimes thinking of real numbers as infinite decimals, and indeed this perspective is useful in some contexts. There are a few reasons that introductory real analysis courses tend to push students to not think of real numbers this way.



First, students are typically already familiar with this perspective on real numbers, but are not familiar with other perspectives that are more useful and natural most of the time in advanced mathematics. So it is not especially necessary to teach students about real numbers as infinite decimals, but it is necessary to teach other perspectives, and to teach students to not exclusively (or even primarily) think about real numbers as infinite decimals.



Second, a major goal of many such courses is to rigorously develop the theory of the real numbers from "first principles" (e.g., in a naive set theory framework). Students who are familiar with real numbers as infinite decimals are almost never familiar with them in a truly rigorous way. For instance, do they really know how to rigorously define how to multiply two infinite decimals? Almost certainly not, and most of them would have a lot of difficulty doing so even if they tried to. It is possible to give a completely rigorous construction of the real numbers as infinite decimals, but it is not particularly easy or enlightening to do so (in comparison with other constructions of the real numbers). In any case, if you are constructing the real numbers rigorously from scratch, that means you need to "forget" everything you already "knew" about real numbers. So students need to be told to not assume facts about real numbers based on whatever naive understanding they might have had previously.



Third, it is misleading to describe infinite decimals as the basic "naive" understanding of the real numbers. It is unfortunately often the main understanding that is taught in grade school, but this emphasis obscures the fact that ultimately the motivation for real numbers is the intuitive idea of measuring non-discrete quantities, such as geometric lengths. When you think about real numbers this way, they are much more closely related to the concept of a "complete ordered field" than they are to the concept of infinite decimals. Ancient mathematicians reasoned about numbers in this way for centuries without the modern decimal notation for them. So actually the idea of representing numbers by infinite decimals is not at all a simple "naive" idea but a complicated and quite clever idea (which has some important subtleties, such as the fact that two different decimal expansions can represent the same number). It's kind of just an accident that nowadays students are taught about this perspective on real numbers long before any others.






– Eric Wofsey · community wiki · edited Dec 27 '16 at 1:14































    25














    One objection to thinking of real numbers as "infinite decimals" is that it lends itself to thinking of the distinction between rational and irrational numbers as being primarily about whether the decimals repeat or not. This in turn leads to some very problematic misunderstandings, such as the one in the image below:
    [image: page from a math book asserting that $8/23$ is an irrational number because its digits show no pattern]



    Yes, you read it right: the author of this book thinks that $8/23$ is an irrational number, because there is no pattern to the digits.



    Now it is easy to dismiss this as simple ignorance: of course the digits do repeat, but you have to go further out in the decimal sequence before this becomes visible. But once you notice this, you start to recognize all kinds of problems with the "non-repeating decimal" notion of irrational number: How can you ever tell, by looking at a decimal representation of a real number, whether it is rational or irrational? After all, the most we can ever see is finitely many digits; maybe the repeating portion starts after the part we are looking at? Or maybe what looks like a repeating decimal (say $0.6666\dots$) turns out to have a 3 in the thirty-fifth decimal place, and thereafter is non-repeating?



    Now obviously these problems can be circumvented by a more precise notion of "rational": A rational number is one that can be expressed as a ratio of integers, an irrational number is one that cannot be. But you would be surprised how resilient the misconception shown in the image above can be. Related errors are pervasive: I am sure I am not the only one who has seen students use a calculator to get a decimal approximation of some irrational number (say for example $\log 2$) and several steps later use a "convert to fraction" command on their graphing calculator to express a string of digits as some close-but-not-equal rational number.



    If you really want to get students away from these kinds of mistakes, at some point you have to provide them with a notion of "number" that is independent of the decimal representation of that number.
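
As a concrete check (a sketch I am adding, not part of the original answer; the helper name is arbitrary): long division exposes the hidden repetition in $8/23$. The decimal of $p/q$ starts repeating as soon as a long-division remainder recurs, and for $q = 23$ that takes $22$ steps:

    def decimal_period(p, q):
        """Length of the repeating block of p/q in base 10, for q coprime
        to 10: run the long-division remainders until one recurs."""
        seen = {}
        r, pos = p % q, 0
        while r not in seen:
            seen[r] = pos
            r = (r * 10) % q
            pos += 1
        return pos - seen[r]

    print(decimal_period(8, 23))  # 22: the digits of 8/23 do repeat

So the repetition is real but invisible in any short prefix, which is exactly why "repeating vs. non-repeating digits" is a poor working test of rationality.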

























    • 12




      And which (mathematically) idiot publisher printed this book?
      – user21820
      Dec 27 '16 at 9:27






    • 3




      It appears to be page 20 of Cool Math by Christy Maganzini, published in 1997 by Price Stern Sloan. But the answerer's point is that this sort of misunderstanding is widespread, so let's not single this out too much.
      – JdeBP
      Dec 27 '16 at 13:23








    • 2




      The book author was also wrong to claim that there is no pattern to the digits of $8/23$. Apparently she wasn't aware that the decimal representation has period 22.
      – user1551
      Dec 27 '16 at 14:32






    • 11




      $\frac{8}{23}$ is now my favorite irrational.
      – Jeppe Stig Nielsen
      Dec 27 '16 at 16:28






    • 5




      @JeppeStigNielsen Mine too. And $\pi$ is my favorite rational (since as we all know, if you measure a circle's circumference and divide it by the diameter, you get $\pi$.)
      – mweiss
      Dec 27 '16 at 17:22



















    16














    For the same reason that it is incorrect to think of linear transformations as matrices, or to think of real $n$-dimensional vector spaces as just $\mathbb{R}^{n}$.



    What's so special about base $10$? Why not binary numbers or ternary numbers? In particular, ternary numbers would come in handy for understanding Cantor ternary sets. But apparently we should think in decimals?



    This non-canonical choice is unnecessary, aesthetically displeasing, and distracts from certain intuitions to be gained from, say, the geometric view of the reals as points on a line. It's better to think of the real numbers as what they are: A system of things (what are those things? Equivalence classes of sequences of rationals? Points on a line? It doesn't matter) that have certain very nice properties. Just like vectors are just elements of vector spaces: Objects that obey certain rules. Shoving extra stuff in there distracts from the mathematics.
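
To illustrate the base-$3$ point (my sketch, not part of the original answer; the function name and the digit cutoff are illustrative choices), membership of a rational number in the Cantor ternary set is just the existence of a ternary expansion avoiding the digit $1$, which can be tested digit by digit:

    from fractions import Fraction

    def in_cantor_set(x, depth=60):
        """Test (up to `depth` ternary digits) whether the fraction x in
        [0,1] has a base-3 expansion using only the digits 0 and 2."""
        for _ in range(depth):
            if x == 0 or x == 1:        # 1 = 0.222... in base 3
                return True
            d, x = divmod(3 * x, 1)     # next ternary digit, remaining part
            if d == 1:
                # a digit 1 is avoidable only if the expansion terminates
                # here, since 0.d...1 can be rewritten as 0.d...0222...
                return x == 0
        return True  # no digit 1 found within `depth` digits

    print(in_cantor_set(Fraction(1, 4)))  # True:  1/4 = 0.020202... in base 3
    print(in_cantor_set(Fraction(1, 2)))  # False: 1/2 = 0.111...   in base 3

In base $10$ the same set has no such clean digit description, which is the point: the choice of base is extra structure, handy in some problems and noise in others.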

























    • 1




      Good answer; it is just a question of how well the model or interpretation suits a particular aspect of the mathematical object.
      – z100
      Dec 27 '16 at 1:51






    • 1




      By coincidence, this is a hot question at the same time I answered this question.
      – Will R
      Dec 27 '16 at 1:58








    • 2




      I think in the first sentence it should be "to think of linear transformations as matrices"?
      – Paŭlo Ebermann
      Dec 27 '16 at 2:17






    • 2




      @PauloEbermann: What's the difference? Given a matrix, you don't know the transformation without being given the bases. Given a transformation, you can't write down a matrix unless you choose two bases. Am I mistaken?
      – Will R
      Dec 27 '16 at 2:20








    • 1




      I really wish either I had waited until you posted this answer, or that you had posted this answer an hour earlier than you did. I really like how you described this!
      – user304051
      Dec 27 '16 at 3:49



















    11














    I think that teaching the reals as infinite decimals in the first place is a mistake arising out of the lack of a better description.



    Here is how I present the problem of infinite decimals. How, for example, does one define the sum of two infinite decimals? One is supposed to start adding at the right and work to the left, as in elementary school. But there is no rightmost digit. So how can we define addition? Well, OK, to salvage the idea one takes all the truncations, adds them, and then proves that the results stabilize. That is, we must prove that they converge. This can be done, but it is kind of messy. How do we then define multiplication? The same thing, but worse. Imagine proving the distributive law. In any case, you have arrived by necessity at the concept of a Cauchy sequence.
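
A minimal Python sketch of the salvage just described (my illustration, not the answerer's; the example numbers are arbitrary): add the length-$k$ truncations of two expansions and watch the sum stabilize as $k$ grows:

    from fractions import Fraction

    def truncation(digits, k):
        """Exact value of the first k digits after the decimal point."""
        return sum(Fraction(d, 10 ** (i + 1)) for i, d in enumerate(digits[:k]))

    # x = 5/7 = 0.714285714285..., y = 1/9 = 0.111...; x + y = 52/63 = 0.825396...
    x = [7, 1, 4, 2, 8, 5] * 4
    y = [1] * 24
    for k in (1, 2, 6, 12, 24):
        print(k, float(truncation(x, k) + truncation(y, k)))
    # 0.8, 0.82, 0.825396, ... : the truncated sums converge to 52/63

Proving in general that these truncated sums converge, and that the limit behaves as expected, is exactly the Cauchy-sequence work described above.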



    The other reason is that infinite decimals lack the geometric intuition necessary for understanding the topology of the reals.

























    • 2




      I'm curious, and forgive me if this is a simple question....but couldn't a series representation be used in place of a truncated form, and allow showing convergence to be more straightforward? And if a series representation is possible, wouldn't the relative summation and product spaces be more or less straightforward to express? I guess this could just as easily lead to Cauchy sequence though...
      – user304051
      Dec 27 '16 at 1:15






    • 1




      But what would be a better description? Infinite decimals do seem a bad way -- look at how many people ask, over and over, "does 0.999999... = 1?", "why does 0.999999... = 1?", "isn't 0.999999... just a 'little bit less than' 1?", and so forth...
      – The_Sympathizer
      Dec 27 '16 at 1:29






    • 1




      @floorcat A series is the same thing as an infinite decimal.
      – Rene Schipperus
      Dec 27 '16 at 1:54






    • 1




      I was confused because most of the comments/answers use the actual non-terminating decimal as opposed to what I consider a series representation form, e.g. summation or product space notation.
      – user304051
      Dec 27 '16 at 2:01



















    9














    Decimal notation for general real numbers is not universally intuitive. It has a number of problems:




    • Arithmetic with nonterminating decimals has unfamiliar complications, because people are used to doing arithmetic from the right end.

    • Some people have great difficulty accepting that the representation is not unique.

    • Some people don't grasp the infinite nature, and instead imagine a decimal as simply having a large but finite number of digits — sometimes with the belief that decimals are inherently approximate and cannot represent a number exactly.

    • Some people fail to grasp the infinite nature in a different way, only being able to conceptualize a sequence of terminating decimals.


    It even has severe philosophical problems; e.g., it is ill-suited to various constructive approaches to mathematics. You can't compute even the first digit of $.333\ldots + .666\ldots$ unless you can prove that a carry does or does not propagate in from the right (or you can prove it is a special case, such as the numbers actually being $1/3 + 2/3$, rather than some unknown sequence you actually have to generate to find its digits).
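
A small sketch of this last obstruction (mine, not the answerer's): knowing $k$ digits of each summand only pins the sum down to an interval of width $2 \cdot 10^{-k}$, and for $0.333\ldots + 0.666\ldots$ every such interval straddles $1$, so no digit of the sum is ever decided:

    from fractions import Fraction

    def sum_interval(xs, ys, k):
        """Interval guaranteed to contain x + y when only the first k
        digits of each expansion are known."""
        low = sum(Fraction(a + b, 10 ** (i + 1))
                  for i, (a, b) in enumerate(zip(xs[:k], ys[:k])))
        return low, low + Fraction(2, 10 ** k)

    xs, ys = [3] * 50, [6] * 50
    for k in (1, 5, 12):
        lo, hi = sum_interval(xs, ys, k)
        print(k, float(lo), float(hi))  # every interval contains 1, so
                                        # even the units digit is undecided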



    There are pedagogical problems as well; the theory behind decimals arises from that of infinite summations, but calculus and analysis courses tend to prefer to start with limits, continuity, and similar topological notions.





    Incidentally, I believe I have seen at least one text whose proof that a complete ordered field exists proceeds by showing that the arithmetic on decimal numbers makes them one.





































      6














      The existing answers are great but have not addressed the following interesting part of the question:




      Isn't it good to see how the conventional treatment is connected to, and grows out of, more 'naive' ideas?




      Namely, I shall show how the idea of decimal representation leads naturally and inexorably to the idea of equivalence classes of (regular) Cauchy sequences.



      First, one needs to be extremely careful because decimal representations of reals are not unique, and it is a chore even to define the basic arithmetic operations on decimals. For instance, to test whether two decimals are equal is no longer obvious, and simply defining away the problem (like stipulating that decimals cannot have endless repeating '9's) does not help! Surprised? Consider $3 \times 0.333\overline{3}$. Ordinary multiplication yields $0.999\overline{9}$, so the easiest way to resolve this issue is to invoke a canonization at the end of each arithmetic operation. Let me spell out this process in full: If the result ends with "$\overline{9}$", change all those digits to "$\overline{0}$" and add $1$ to the preceding digit, as usual carrying over if needed. Not to forget the numerous cases due to the sign (positive, negative, zero).
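
Here is a minimal Python rendering of that canonization step (my sketch, not the answer's own notation; it covers only non-negative decimals, represented as an integer part plus the digits preceding an infinite tail of 9s):

    def canonize(int_part, digits):
        """Canonical form of the decimal int_part.digits followed by an
        infinite tail of 9s; e.g. (0, [2, 4]) encodes 0.24999..."""
        digits = digits[:]           # turn the trailing 9s into 0s and
        i = len(digits) - 1          # carry 1 into the last non-9 digit
        while i >= 0 and digits[i] == 9:
            digits[i] = 0
            i -= 1
        if i < 0:
            return int_part + 1, digits   # e.g. 0.999... -> 1.000...
        digits[i] += 1
        return int_part, digits

    print(canonize(0, [2, 4]))  # (0, [2, 5]): 0.24999... = 0.25
    print(canonize(0, []))      # (1, []):     0.999...   = 1

This carrying machinery, repeated after every arithmetic operation, is precisely the bookkeeping that the Cauchy-sequence formulation lets you discard.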



      But look! The very idea of canonization corresponds to the idea of picking out representatives under an equivalence relation over the decimals. But then a natural question arises: Why does this equivalence relation seem so weird? It considers $0.999\overline{9}$ equivalent to $1.000\overline{0}$, but why not any other 'endings'? Interestingly, we can understand it better by using an alternative definition that decimals $x,y$ are equivalent iff $x-y$ using ordinary subtraction (without canonization) is $0.\overline{0}$. Addition and subtraction here must be defined for non-negative decimals such that we always subtract the smaller from the larger before attaching the correct sign, and then defined for other signs by the usual cases. This definition shows concretely that except for the equivalences involving "$\overline{9}$" and "$\overline{0}$" described above, any other pair of decimals have a nonzero difference.



      But note that in the subtraction algorithm we needed to know which decimal is larger. We could define non-strictly larger by the usual comparison algorithm, but what does this comparison really mean? Actually this is easily answered if one thinks carefully about the meaning of decimals in the first place.




      • "$3.cdots$" means some amount in the range from $3$ to $4$.


      • "$3.1cdots$" means some amount in the range from $3.1$ to $3.2$.


      • "$3.14cdots$" means some amount in the range from $3.14$ to $3.15$.



      In short:




      • $3.\cdots \in [3,4]$.

      • $3.1\cdots \in [3.1,3.2]$.

      • $3.14\cdots \in [3.14,3.15]$.



      This is precisely what guides us to invent the comparison algorithm. Also, notice that each decimal, being a sequence of digits, corresponds exactly to a sequence of intervals that narrows with each step, such that the $k$-th interval in the sequence has width $10^{-k}$. This very nicely corresponds to one type of (regular) Cauchy sequence!



      Observe now that the definition of Cauchy sequence is neater and easier to use simply because it discards the implementation details, which in the case of decimals includes the carry-over parts of the algorithms and the specific base-10 format and the strong convergence guarantee. Otherwise there really is no difference between decimals and Cauchy sequences. Finally, using the equivalence classes as real numbers is just an alternative to picking a canonical representative from each class.





































        5














        The infinite decimal interpretation of $\mathbb{R}$ leads to problems:
        $$ 0.49999\dots = 0.5 $$
        If you are trying to find a bijection $\left[0,1\right[ \to \left[0,1\right[ \times \left[0,1\right[$, you might try:
        $$ 0.ababababab\dots \mapsto (0.aaaaaaa\dots,\ 0.bbbbbbb\dots) $$
        which does not work due to the existence of $0.4999\dots$.
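
Spelling the failure out (a worked example I am adding, not part of the original answer): applying the recipe to the two decimal expansions of $\tfrac12$ gives
$$ 0.5000\dots \mapsto (0.5000\dots,\ 0.000\dots) = (\tfrac12,\ 0), \qquad 0.4999\dots \mapsto (0.4999\dots,\ 0.999\dots) = (\tfrac12,\ 1), $$
and $1 \notin \left[0,1\right[$, so the formula is well defined on digit strings but not on the real numbers they represent.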

























        • 6




          How is this a "problem", exactly?
          – Eric Wofsey
          Dec 27 '16 at 0:04






        • 9




          @EricWofsey It gives the incorrect intuition that $0.4999\dots$ is different from $0.5$.
          – Henricus V.
          Dec 27 '16 at 0:05






        • 1




          Intuition is neither fixed nor objective. It's certainly not intuitive that an operation can be noncommutative, for instance, until one has learned enough that it becomes intuitive. Further, in this particular case, it arises from incorrect understanding, so of course there's a problem, but it's not caused by what you imply it to be.
          – Nij
          Dec 27 '16 at 2:43






        • 3




          I think this may be one of the key reasons why you don't think of reals as infinite decimals. Infinite decimals have more than one representation for any number (such as .4999... and .5 given here). This can lead to false proofs. For example, if you need to pick a number smaller than .5, it is not immediately obvious that .499... is not such a number. By the time you understand enough to understand why this is the case, you might as well have learned to think of the reals properly. Thinking of them as infinite decimals didn't help.
          – Cort Ammon
          Dec 27 '16 at 3:28






        • 2




          this distinction is necessary when considering the proof that the real numbers between 0 and 1 are uncountable.
          – robert bristow-johnson
          Dec 27 '16 at 5:27



















        2














        One answer to




        What is so wrong with thinking of real numbers as infinite decimals?




        is that it's somewhat old-fashioned. Another is that it doesn't work really well in more advanced courses in analysis. But Courant's calculus, one of the best texts ever, shows this in Section 2 of the first chapter:



        [image: screenshot of page 8 (Chapter 1, Section 2) of Courant's Differential and Integral Calculus, Vol. I]



        That's a screenshot from page 8; you can see it at



        https://archive.org/stream/DifferentialIntegralCalculusVolI/Courant-DifferentialIntegralCalculusVolI#page/n23/mode/2up

























        • 1




          @amWhy I think it's an interesting bit of history that addresses the OP's question, which is why I posted it. Of course one can link to the full text of the book - if you know that it contains something relevant. Feel free to downvote.
          – Ethan Bolker
          Dec 27 '16 at 0:25






        • 1




          I don't question that. But it remains little more than a "link-only" answer.
          – amWhy
          Dec 27 '16 at 0:29








        • 1




          I'm not trying to be mean. And I haven't flagged the answer, nor voted to delete it as a link-only answer. I'm trying to inform you, that's all.
          – amWhy
          Dec 27 '16 at 0:32






        • 1




          @amWhy I don't find your comments mean (and shouldn't have been snarky about downvote). I too dislike link only answers. I agree that this one's a close call. We just came down on different sides.
          – Ethan Bolker
          Dec 27 '16 at 0:49



















        0














        I think it's actually good to teach people that definition of a real number when they're in university and old enough to understand what they're being taught. After reading https://www.inc.com/bill-murphy-jr/science-says-were-sending-our-kids-to-school-much-too-early-and-that-can-hurt-th.html, I think that kids learn better in school if they wait until they're older to start grade 1, so I think it's also true that when they're older, they're more able to learn that definition of a real number and the reason the teacher is using it.

        People probably sometimes learn something too young, misunderstand what the teacher was trying to teach them, and then later have trouble breaking their old habits. For example, those who got taught what a fraction is and how to add, subtract, multiply, and divide fractions at such a young age might think the rational numbers are all the numbers, and have trouble later breaking that old habit and learning that irrational numbers exist. Maybe people didn't quite get taught the decimal representation of a real number properly in elementary school; I think they should be taught a very similar definition in university.

        In elementary school I had an intuitive idea of some of the properties of a complete ordered field, but did not think of the property of completeness on my own. Later, I once tried to figure out what a real number is and came up with my own definition. I first constructed the dyadic rationals, the terminating decimals in base 2; then, for each cut that has no boundary position as a number in that set, I invented a number to lie between the two parts of the cut, and saw that the result more or less corresponds to base-2 decimal notations that forbid trailing 1's.

        I don't think that once people are in university, they should be taught to derive properties from the fact that $(\mathbb{R}, 1, 0, +, \times, \leq)$ is a complete ordered field. I know some people are thinking they can't break their old habit of thinking a decimal representation is a real number, when actually the real numbers already existed and somebody invented a notation for each of them. I think it's more important to teach them not to make the unjustified assumption that the real numbers with those operations have the properties of a complete ordered field that they did think of. Construction from the Dedekind cuts of the terminating decimals actually works well, and from that gives an intuitive way of deciding how to represent each real number, though one that forbids a string of trailing 9's, different from what I was taught in elementary school, namely that 0.999... = 1.

        They can then be taught what a complete ordered field is, with Modern Algebra being a prerequisite to that course, and be taught that $+$, $\times$, and $\leq$ have been defined and that $(\mathbb{R}, 1, 0, +, \times, \leq)$ has been proven in ZF to be a complete ordered field that is unique up to isomorphism, and that it's even easier to show that it's isomorphic to the base-2 construction than that it's unique up to isomorphism. Maybe that course itself could be a prerequisite to an even later course that teaches how to write a formal proof in ZF, where one of the test questions asks you to write a formal proof in ZF that there exists a complete ordered field which is unique up to isomorphism, and only those who can figure out how to write a complete formal proof will get full marks, so that a job will take on average better applicants.





























          Your Answer





          StackExchange.ifUsing("editor", function () {
          return StackExchange.using("mathjaxEditing", function () {
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          });
          });
          }, "mathjax-editing");

          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "69"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2073041%2fwhat-is-so-wrong-with-thinking-of-real-numbers-as-infinite-decimals%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          9 Answers
          9






          active

          oldest

          votes








          9 Answers
          9






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          43














          There is nothing wrong with sometimes thinking of real numbers as infinite decimals, and indeed this perspective is useful in some contexts. There are a few reasons that introductory real analysis courses tend to push students to not think of real numbers this way.



          First, students are typically already familiar with this perspective on real numbers, but are not familiar with other perspectives that are more useful and natural most of the time in advanced mathematics. So it is not especially necessary to teach students about real numbers as infinite decimals, but it is necessary to teach other perspectives, and to teach students to not exclusively (or even primarily) think about real numbers as infinite decimals.



          Second, a major goal of many such courses is to rigorously develop the theory of the real numbers from "first principles" (e.g., in a naive set theory framework). Students who are familiar with real numbers as infinite decimals are almost never familiar with them in a truly rigorous way. For instance, do they really know how to rigorously define how to multiply two infinite decimals? Almost certainly not, and most of them would have a lot of difficulty doing so even if they tried to. It is possible to give a completely rigorous construction of the real numbers as infinite decimals, but it is not particularly easy or enlightening to do so (in comparison with other constructions of the real numbers). In any case, if you are constructing the real numbers rigorously from scratch, that means you need to "forget" everything you already "knew" about real numbers. So students need to be told to not assume facts about real numbers based on whatever naive understanding they might have had previously.



          Third, it is misleading to describe infinite decimals as the basic "naive" understanding of the real numbers. It is unfortunately often the main understanding that is taught in grade school, but this emphasis obscures the fact that ultimately the motivation for real numbers is the intuitive idea of measuring non-discrete quantities, such as geometric lengths. When you think about real numbers this way, they are much more closely related to the concept of a "complete ordered field" than they are to the concept of infinite decimals. Ancient mathematicians reasoned about numbers in this way for centuries without the modern decimal notation for them. So actually the idea of representing numbers by infinite decimals is not at all a simple "naive" idea but a complicated and quite clever idea (which has some important subtleties, such as the fact that two different decimal expansions can represent the same number). It's kind of just an accident that nowadays students are taught about this perspective on real numbers long before any others.






          share|cite|improve this answer




























            43














            There is nothing wrong with sometimes thinking of real numbers as infinite decimals, and indeed this perspective is useful in some contexts. There are a few reasons that introductory real analysis courses tend to push students to not think of real numbers this way.



            First, students are typically already familiar with this perspective on real numbers, but are not familiar with other perspectives that are more useful and natural most of the time in advanced mathematics. So it is not especially necessary to teach students about real numbers as infinite decimals, but it is necessary to teach other perspectives, and to teach students to not exclusively (or even primarily) think about real numbers as infinite decimals.



            Second, a major goal of many such courses is to rigorously develop the theory of the real numbers from "first principles" (e.g., in a naive set theory framework). Students who are familiar with real numbers as infinite decimals are almost never familiar with them in a truly rigorous way. For instance, do they really know how to rigorously define how to multiply two infinite decimals? Almost certainly not, and most of them would have a lot of difficulty doing so even if they tried to. It is possible to give a completely rigorous construction of the real numbers as infinite decimals, but it is not particularly easy or enlightening to do so (in comparison with other constructions of the real numbers). In any case, if you are constructing the real numbers rigorously from scratch, that means you need to "forget" everything you already "knew" about real numbers. So students need to be told to not assume facts about real numbers based on whatever naive understanding they might have had previously.



            Third, it is misleading to describe infinite decimals as the basic "naive" understanding of the real numbers. It is unfortunately often the main understanding that is taught in grade school, but this emphasis obscures the fact that ultimately the motivation for real numbers is the intuitive idea of measuring non-discrete quantities, such as geometric lengths. When you think about real numbers this way, they are much more closely related to the concept of a "complete ordered field" than they are to the concept of infinite decimals. Ancient mathematicians reasoned about numbers in this way for centuries without the modern decimal notation for them. So actually the idea of representing numbers by infinite decimals is not at all a simple "naive" idea but a complicated and quite clever idea (which has some important subtleties, such as the fact that two different decimal expansions can represent the same number). It's kind of just an accident that nowadays students are taught about this perspective on real numbers long before any others.






            share|cite|improve this answer


























              43












              43








              43






              There is nothing wrong with sometimes thinking of real numbers as infinite decimals, and indeed this perspective is useful in some contexts. There are a few reasons that introductory real analysis courses tend to push students to not think of real numbers this way.



              First, students are typically already familiar with this perspective on real numbers, but are not familiar with other perspectives that are more useful and natural most of the time in advanced mathematics. So it is not especially necessary to teach students about real numbers as infinite decimals, but it is necessary to teach other perspectives, and to teach students to not exclusively (or even primarily) think about real numbers as infinite decimals.



              Second, a major goal of many such courses is to rigorously develop the theory of the real numbers from "first principles" (e.g., in a naive set theory framework). Students who are familiar with real numbers as infinite decimals are almost never familiar with them in a truly rigorous way. For instance, do they really know how to rigorously define how to multiply two infinite decimals? Almost certainly not, and most of them would have a lot of difficulty doing so even if they tried to. It is possible to give a completely rigorous construction of the real numbers as infinite decimals, but it is not particularly easy or enlightening to do so (in comparison with other constructions of the real numbers). In any case, if you are constructing the real numbers rigorously from scratch, that means you need to "forget" everything you already "knew" about real numbers. So students need to be told to not assume facts about real numbers based on whatever naive understanding they might have had previously.



              Third, it is misleading to describe infinite decimals as the basic "naive" understanding of the real numbers. It is unfortunately often the main understanding that is taught in grade school, but this emphasis obscures the fact that ultimately the motivation for real numbers is the intuitive idea of measuring non-discrete quantities, such as geometric lengths. When you think about real numbers this way, they are much more closely related to the concept of a "complete ordered field" than they are to the concept of infinite decimals. Ancient mathematicians reasoned about numbers in this way for centuries without the modern decimal notation for them. So actually the idea of representing numbers by infinite decimals is not at all a simple "naive" idea but a complicated and quite clever idea (which has some important subtleties, such as the fact that two different decimal expansions can represent the same number). It's kind of just an accident that nowadays students are taught about this perspective on real numbers long before any others.






              share|cite|improve this answer














              There is nothing wrong with sometimes thinking of real numbers as infinite decimals, and indeed this perspective is useful in some contexts. There are a few reasons that introductory real analysis courses tend to push students to not think of real numbers this way.



              First, students are typically already familiar with this perspective on real numbers, but are not familiar with other perspectives that are more useful and natural most of the time in advanced mathematics. So it is not especially necessary to teach students about real numbers as infinite decimals, but it is necessary to teach other perspectives, and to teach students to not exclusively (or even primarily) think about real numbers as infinite decimals.



              Second, a major goal of many such courses is to rigorously develop the theory of the real numbers from "first principles" (e.g., in a naive set theory framework). Students who are familiar with real numbers as infinite decimals are almost never familiar with them in a truly rigorous way. For instance, do they really know how to rigorously define how to multiply two infinite decimals? Almost certainly not, and most of them would have a lot of difficulty doing so even if they tried to. It is possible to give a completely rigorous construction of the real numbers as infinite decimals, but it is not particularly easy or enlightening to do so (in comparison with other constructions of the real numbers). In any case, if you are constructing the real numbers rigorously from scratch, that means you need to "forget" everything you already "knew" about real numbers. So students need to be told to not assume facts about real numbers based on whatever naive understanding they might have had previously.



              Third, it is misleading to describe infinite decimals as the basic "naive" understanding of the real numbers. It is unfortunately often the main understanding that is taught in grade school, but this emphasis obscures the fact that ultimately the motivation for real numbers is the intuitive idea of measuring non-discrete quantities, such as geometric lengths. When you think about real numbers this way, they are much more closely related to the concept of a "complete ordered field" than they are to the concept of infinite decimals. Ancient mathematicians reasoned about numbers in this way for centuries without the modern decimal notation for them. So actually the idea of representing numbers by infinite decimals is not at all a simple "naive" idea but a complicated and quite clever idea (which has some important subtleties, such as the fact that two different decimal expansions can represent the same number). It's kind of just an accident that nowadays students are taught about this perspective on real numbers long before any others.







              share|cite|improve this answer














              share|cite|improve this answer



              share|cite|improve this answer








              edited Dec 27 '16 at 1:14


























              community wiki





              Eric Wofsey
























                  25














                  One objection to thinking of real numbers as "infinite decimals" is that it lends itself to thinking of the distinction between rational and irrational numbers as being primarily about whether the decimals repeat or not. This in turn leads to some very problematic misunderstandings, such as the one in the image below:
                  enter image description here



                  Yes, you read it right: the author of this book thinks that $8/23$ is an irrational number, because there is no pattern to the digits.



                  Now it is easy to dismiss this as simple ignorance: of course the digits do repeat, but you have to go further out in the decimal sequence before this becomes visible. But once you notice this, you start to recognize all kinds of problems with the "non-repeating decimal" notion of irrational number: How can you ever tell, by looking at a decimal representation of a real number, whether it is rational or irrational? After all, the most we can ever see is finitely many digits; maybe the repeating portion starts after the part we are looking at? Or maybe what looks like a repeating decimal (say $0.6666dots$) turns out to have a 3 in the thirty-fifth decimal place, and thereafter is non-repeating?



                  Now obviously these problems can be circumvented by a more precise notion of "rational": A rational number is one that can be expressed as a ratio of integers, an irrational number is one that cannot be. But you would be surprised how resilient the misconception shown in the image above can be. Related errors are pervasive: I am sure I am not the only one who has seen students use a calculator to get a decimal approximation of some irrational number (say for example $log 2$) and several steps later use a "convert to fraction" command on their graphing calculator to express a string of digits as some close-but-not-equal rational number.



                  If you really want to get students away from these kinds of mistakes, at some point you have to provide them with a notion of "number" that is independent of the decimal representation of that number.






                  share|cite|improve this answer



















                  • 12




                    And which (mathematically) idiot publisher printed this book?
                    – user21820
                    Dec 27 '16 at 9:27






                  • 3




                    It appears to be page 20 of Cool Math by Christy Maganzini, published in 1997 by Price Stern Sloan. But the answerer's point is that this sort of misunderstanding is widespread, so let's not single this out too much.
                    – JdeBP
                    Dec 27 '16 at 13:23








                  • 2




                    The book author was also wrong to claim that there is no pattern to the digits of $8/23$. Apparently she wasn't aware that the decimal representation has period 22.
                    – user1551
                    Dec 27 '16 at 14:32






                  • 11




                    $frac{8}{23}$ is now my favorite irrational.
                    – Jeppe Stig Nielsen
                    Dec 27 '16 at 16:28






                  • 5




                    @JeppeStigNielsen Mine too. And $pi$ is my favorite rational (since as well all know if you measure a circle's circumference and divide it by the diameter, you get $pi$.)
                    – mweiss
                    Dec 27 '16 at 17:22
















                  25














                  One objection to thinking of real numbers as "infinite decimals" is that it lends itself to thinking of the distinction between rational and irrational numbers as being primarily about whether the decimals repeat or not. This in turn leads to some very problematic misunderstandings, such as the one in the image below:
                  enter image description here



                  Yes, you read it right: the author of this book thinks that $8/23$ is an irrational number, because there is no pattern to the digits.



                  Now it is easy to dismiss this as simple ignorance: of course the digits do repeat, but you have to go further out in the decimal sequence before this becomes visible. But once you notice this, you start to recognize all kinds of problems with the "non-repeating decimal" notion of irrational number: How can you ever tell, by looking at a decimal representation of a real number, whether it is rational or irrational? After all, the most we can ever see is finitely many digits; maybe the repeating portion starts after the part we are looking at? Or maybe what looks like a repeating decimal (say $0.6666dots$) turns out to have a 3 in the thirty-fifth decimal place, and thereafter is non-repeating?



                  Now obviously these problems can be circumvented by a more precise notion of "rational": A rational number is one that can be expressed as a ratio of integers, an irrational number is one that cannot be. But you would be surprised how resilient the misconception shown in the image above can be. Related errors are pervasive: I am sure I am not the only one who has seen students use a calculator to get a decimal approximation of some irrational number (say for example $log 2$) and several steps later use a "convert to fraction" command on their graphing calculator to express a string of digits as some close-but-not-equal rational number.



                  If you really want to get students away from these kinds of mistakes, at some point you have to provide them with a notion of "number" that is independent of the decimal representation of that number.






                  share|cite|improve this answer



















                  • 12




                    And which (mathematically) idiot publisher printed this book?
                    – user21820
                    Dec 27 '16 at 9:27






                  • 3




                    It appears to be page 20 of Cool Math by Christy Maganzini, published in 1997 by Price Stern Sloan. But the answerer's point is that this sort of misunderstanding is widespread, so let's not single this out too much.
                    – JdeBP
                    Dec 27 '16 at 13:23








                  • 2




                    The book author was also wrong to claim that there is no pattern to the digits of $8/23$. Apparently she wasn't aware that the decimal representation has period 22.
                    – user1551
                    Dec 27 '16 at 14:32






                  • 11




$\frac{8}{23}$ is now my favorite irrational.
                    – Jeppe Stig Nielsen
                    Dec 27 '16 at 16:28






                  • 5




@JeppeStigNielsen Mine too. And $\pi$ is my favorite rational (since, as we all know, if you measure a circle's circumference and divide it by the diameter, you get $\pi$.)
                    – mweiss
                    Dec 27 '16 at 17:22

























                  16














For the same reason that it is incorrect to think of linear transformations as matrices, or to think of real $n$-dimensional vector spaces as just $\mathbb{R}^{n}$.



                  What's so special about base $10$? Why not binary numbers or ternary numbers? In particular, ternary numbers would come in handy for understanding Cantor ternary sets. But apparently we should think in decimals?
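(To make the arbitrariness of the base concrete, here is a small Python sketch of my own, not part of the original answer. The same number $1/4$ reads $0.25$ in decimal, $0.01$ in binary, and $0.\overline{02}$ in ternary, and it is precisely the ternary expansion, using only the digits $0$ and $2$, that certifies $1/4$ as a point of the Cantor set.)

    from fractions import Fraction

    def digits(x, base, n):
        """First n fractional digits of x (0 <= x < 1) in the given base."""
        out = []
        for _ in range(n):
            x *= base
            d = int(x)          # next digit is the integer part
            out.append(str(d))
            x -= d
        return "0." + "".join(out)

    q = Fraction(1, 4)
    print(digits(q, 10, 12))    # 0.250000000000
    print(digits(q, 2, 12))     # 0.010000000000
    print(digits(q, 3, 12))     # 0.020202020202  -> only 0s and 2s: in the Cantor set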



This non-canonical choice is unnecessary, aesthetically displeasing, and distracts from certain intuitions to be gained from, say, the geometric view of the reals as points on a line. It's better to think of the real numbers as what they are: a system of things (what are those things? Equivalence classes of sequences of rationals? Points on a line? It doesn't matter) that have certain very nice properties. Just as vectors are simply elements of vector spaces: objects that obey certain rules. Shoving extra stuff in there distracts from the mathematics.






Will R (community wiki), edited Dec 27 '16 at 11:20



















                  • 1




Good answer; it is just a question of how well the model or interpretation suits a particular part of the mathematical object.
                    – z100
                    Dec 27 '16 at 1:51






                  • 1




By coincidence, this was a hot question at the time I answered this one.
                    – Will R
                    Dec 27 '16 at 1:58








                  • 2




                    I think in the first sentence it should be "to think of linear transformations as matrices"?
                    – Paŭlo Ebermann
                    Dec 27 '16 at 2:17






                  • 2




                    @PauloEbermann: What's the difference? Given a matrix, you don't know the transformation without being given the bases. Given a transformation, you can't write down a matrix unless you choose two bases. Am I mistaken?
                    – Will R
                    Dec 27 '16 at 2:20








                  • 1




                    I really wish either I had waited until you posted this answer, or that you had posted this answer an hour earlier than you did. I really like how you described this!
                    – user304051
                    Dec 27 '16 at 3:49



























                  11














I think that teaching reals as infinite decimals in the first place is a mistake, arising from the lack of a better description.



Here is how I present the problem of infinite decimals. How, for example, does one define the sum of two infinite decimals? One is supposed to start adding decimals at the right and work to the left, as in elementary school. But there is no rightmost digit. So how can we define addition? Well, OK, to salvage the idea one takes all the truncations, adds them, and then proves that they stabilize; that is, we must prove that they converge. This can be done, but it is kind of messy. How do we then define multiplication? The same thing, but worse. Imagine proving the distributive law. In any case, you have arrived by necessity at the concept of a Cauchy sequence.
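(A sketch of the truncation procedure just described, mine rather than the answerer's: add the $k$-digit truncations as exact rationals and watch the digits of the sum settle down. Proving they always settle is exactly the convergence argument above.)

    from fractions import Fraction

    def truncation(digit_stream, k):
        """Exact value of the first k fractional digits of the stream."""
        return sum(Fraction(digit_stream(i), 10 ** (i + 1)) for i in range(k))

    a = lambda i: 3            # the stream 0.3333...
    b = lambda i: 4            # the stream 0.4444...

    for k in range(1, 5):
        print(k, truncation(a, k) + truncation(b, k))
    # 1 7/10
    # 2 77/100
    # 3 777/1000   -> the leading digits stabilize quickly here, but for streams
    # 4 7777/10000    like 0.333... + 0.666... no finite stage settles even digit one.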



The other reason is that infinite decimals lack the geometric intuition necessary for understanding the topology of the reals.






answered Dec 27 '16 at 0:19 by Rene Schipperus (community wiki)



















                  • 2




                    I'm curious, and forgive me if this is a simple question....but couldn't a series representation be used in place of a truncated form, and allow showing convergence to be more straightforward? And if a series representation is possible, wouldn't the relative summation and product spaces be more or less straightforward to express? I guess this could just as easily lead to Cauchy sequence though...
                    – user304051
                    Dec 27 '16 at 1:15






                  • 1




                    But what would be a better description? As infinite decimals do seem a bad way -- look at how many people ask, over and over "does 0.999999... = 1?" "why does 0.999999... = 1?" "isn't 0.999999.... just a 'little bit less than' 1?" and so forth...
                    – The_Sympathizer
                    Dec 27 '16 at 1:29






                  • 1




                    @floorcat A series is the same thing as an infinite decimal.
                    – Rene Schipperus
                    Dec 27 '16 at 1:54






                  • 1




                    I was confused because most of the comments/answers use the actual non-terminating decimal as opposed to what I consider a series representation form, e.g. summation or product space notation.
                    – user304051
                    Dec 27 '16 at 2:01



























                  9














                  Decimal notation for general real numbers is not universally intuitive. It has a number of problems:




• Arithmetic with nonterminating decimals has unfamiliar complications, because people are used to doing arithmetic from the right end.

• Some people have great difficulty accepting that the representation is not unique.

• Some people don't grasp the infinite nature, and instead imagine a decimal as simply having a large but finite number of digits, sometimes with the belief that decimals are inherently approximate and cannot represent a number exactly.

• Some people fail to grasp the infinite nature in a different way, only being able to conceptualize a sequence of terminating decimals.


It even has severe philosophical problems; e.g. decimals are ill-suited for various constructive approaches to mathematics. You can't compute even the first digit of $.333\ldots + .666\ldots$ unless you can prove that a carry does or does not propagate in from the right (or you can prove it is a special case, such as the numbers actually being $1/3 + 2/3$, rather than some unknown sequences whose digits you actually have to generate).
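(A sketch of that obstruction, mine rather than the answerer's, with a hypothetical helper name: given only finite prefixes of the two digit streams, interval arithmetic bounds the sum, but for $0.333\ldots + 0.666\ldots$ no prefix, however long, pins down the leading digit.)

    def leading_digit_of_sum(a, b, n):
        """Integer part of a+b (each in [0,1)) if n digits of each force it, else None."""
        lo = sum((a(i) + b(i)) * 10 ** (n - i) for i in range(1, n + 1))
        hi = lo + 2                    # each unseen tail contributes at most 10**-n
        return lo // 10 ** n if lo // 10 ** n == hi // 10 ** n else None

    thirds, two_thirds = (lambda i: 3), (lambda i: 6)
    print([leading_digit_of_sum(thirds, two_thirds, n) for n in (1, 10, 100)])
    # [None, None, None] -- the sum sits between 0.99...9 and 1.00...1 forever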



                  There are pedagogical problems as well; the theory behind decimals arises from that of infinite summations, but calculus and analysis courses tend to prefer to start with limits, continuity, and similar topological notions.





Incidentally, I believe I have seen at least one text that proves the existence of a complete ordered field by showing that decimal numbers, with this arithmetic, form one.






answered Dec 27 '16 at 2:28 by Hurkyl (community wiki)




















































                          6














                          The existing answers are great but have not addressed the following interesting part of the question:




                          Isn't it good to see how the conventional treatment is connected to, and grows out of, more 'naive' ideas?




                          Namely, I shall show how the idea of decimal representation leads naturally and inexorably to the idea of equivalence classes of (regular) Cauchy sequences.



First, one needs to be extremely careful because decimal representations of reals are not unique, and it is a chore even to define the basic arithmetic operations on decimals. For instance, to test whether two decimals are equal is no longer obvious, and simply defining away the problem (like stipulating that decimals cannot have endless repeating '9's) does not help! Surprised? Consider $3 \times 0.333\overline{3}$. Ordinary multiplication yields $0.999\overline{9}$, so the easiest way to resolve this issue is to invoke a canonization at the end of each arithmetic operation. Let me spell out this process in full: if the result ends with "$\overline{9}$", change all those digits to "$\overline{0}$" and add $1$ to the preceding digit, as usual carrying over if needed. Not to forget the numerous cases due to the sign (positive, negative, zero).
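(A minimal sketch of that canonization step, not from the answer itself; the representation is an assumption of mine: a nonnegative decimal stored as an integer part, a finite digit prefix, and one digit repeated forever.)

    def canonize(int_part, prefix, repeated):
        """Rewrite a trailing 9-bar as a 0-bar, carrying 1 into the prefix."""
        if repeated != 9:
            return int_part, prefix, repeated
        digits, carry = list(prefix), 1
        for i in reversed(range(len(digits))):      # propagate the carry leftwards
            digits[i] += carry
            carry, digits[i] = divmod(digits[i], 10)
            if carry == 0:
                break
        return int_part + carry, digits, 0

    print(canonize(0, [9, 9], 9))   # (1, [0, 0], 0): 0.99(9...) becomes 1.00(0...)
    print(canonize(0, [9, 8], 9))   # (0, [9, 9], 0): 0.98(9...) becomes 0.99(0...)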



But look! The very idea of canonization corresponds to the idea of picking out representatives under an equivalence relation over the decimals. But then a natural question arises: Why does this equivalence relation seem so weird? It considers $0.999\overline{9}$ equivalent to $1.000\overline{0}$, but why not any other 'endings'? Interestingly, we can understand it better by using an alternative definition: decimals $x,y$ are equivalent iff $x-y$ using ordinary subtraction (without canonization) is $0.\overline{0}$. Addition and subtraction here must be defined for non-negative decimals such that we always subtract the smaller from the larger before attaching the correct sign, and then defined for other signs by the usual cases. This definition shows concretely that, except for the equivalences involving "$\overline{9}$" and "$\overline{0}$" described above, any other pair of decimals has a nonzero difference.



                          But note that in the subtraction algorithm we needed to know which decimal is larger. We could define non-strictly larger by the usual comparison algorithm, but what does this comparison really mean? Actually this is easily answered if one thinks carefully about the meaning of decimals in the first place.




                          • "$3.cdots$" means some amount in the range from $3$ to $4$.


                          • "$3.1cdots$" means some amount in the range from $3.1$ to $3.2$.


                          • "$3.14cdots$" means some amount in the range from $3.14$ to $3.15$.



                          In short:




• $3.\cdots \in [3,4]$.

• $3.1\cdots \in [3.1,3.2]$.

• $3.14\cdots \in [3.14,3.15]$.



                          This is precisely what guides us to invent the comparison algorithm. Also, notice that each decimal, being a sequence of digits, corresponds exactly to a sequence of intervals that narrows with each step, such that the $k$-th interval in the sequence has width $10^{-k}$. This very nicely corresponds to one type of (regular) Cauchy sequences!
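(The correspondence can be made literal; a small sketch of mine, not from the answer: reading off the digits of a decimal one at a time produces exactly the nested intervals of width $10^{-k}$ listed above.)

    from fractions import Fraction

    def intervals(int_part, digit_stream, k_max):
        """Yield the k-th interval [lo, lo + 10**-k] pinned down by the first k digits."""
        lo = Fraction(int_part)
        for k in range(1, k_max + 1):
            lo += Fraction(digit_stream(k), 10 ** k)
            yield lo, lo + Fraction(1, 10 ** k)

    first_digits_of_pi = {1: 1, 2: 4, 3: 1}.get    # 3.141..., three digits will do
    for lo, hi in intervals(3, first_digits_of_pi, 3):
        print(float(lo), float(hi))
    # 3.1 3.2
    # 3.14 3.15
    # 3.141 3.142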



                          Observe now that the definition of Cauchy sequence is neater and easier to use simply because it discards the implementation details, which in the case of decimals includes the carry-over parts of the algorithms and the specific base-10 format and the strong convergence guarantee. Otherwise there really is no difference between decimals and Cauchy sequences. Finally, using the equivalence classes as real numbers is just an alternative to picking a canonical representative from each class.






answered Dec 27 '16 at 10:31 by user21820 (community wiki)




















































                                  5














The infinite decimal interpretation of $\mathbb{R}$ leads to problems:
$$ 0.49999\dots = 0.5 $$
If you are trying to find a bijection $\left[0,1\right[ \to \left[0,1\right[ \times \left[0,1\right[$, you might try:
$$ 0.ababababab\dots \mapsto (0.aaaaaaa\dots,\, 0.bbbbbbb\dots) $$
which does not work due to the existence of $0.4999\dots$.
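To see the failure concretely, here is a small demo (my own sketch; the function and the truncated digit lists are purely illustrative): the two decimal names of the same real number, $0.5000\dots$ and $0.4999\dots$, de-interleave to different pairs, so the rule is not even well defined on real numbers.

```python
# De-interleaving digits, as in the attempted bijection above.
# Two names of the same real give different outputs. (Illustrative sketch.)

def deinterleave(digits):
    """Split fractional digits d1 d2 d3 d4 ... into (d1 d3 ..., d2 d4 ...)."""
    return digits[0::2], digits[1::2]

print(deinterleave([5, 0, 0, 0, 0, 0]))  # 0.5000... -> (0.500..., 0.000...)
print(deinterleave([4, 9, 9, 9, 9, 9]))  # 0.4999... -> (0.499..., 0.999...) = (0.5, 1)
```

Fixing canonical representatives (forbidding trailing 9s) does not rescue it either: the canonical $0.\overline{19}$ de-interleaves to $(0.\overline{1},\, 0.\overline{9})$, whose second coordinate is $1 \notin \left[0,1\right[$.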






                                  share|cite|improve this answer



















                                  answered Dec 26 '16 at 23:59


























                                  community wiki





                                  Henricus V.









                                  • 6




                                    How is this a "problem", exactly?
                                    – Eric Wofsey
                                    Dec 27 '16 at 0:04






                                  • 9




@EricWofsey It gives the incorrect intuition that $0.4999\dots$ is different from $0.5$.
                                    – Henricus V.
                                    Dec 27 '16 at 0:05






                                  • 1




                                    Intuition is neither fixed nor objective. It's certainly not intuitive that an operation can be noncommutative, for instance, until one has learned enough that it becomes intuitive. Further, in this particular case, it arises from incorrect understanding, so of course there's a problem, but it's not caused by what you imply it to be.
                                    – Nij
                                    Dec 27 '16 at 2:43






                                  • 3




I think this may be one of the key reasons why you don't think of reals as infinite decimals. Infinite decimals can give more than one representation of the same number (such as .4999... and .5 given here). This can lead to false proofs. For example, if you need to pick a number smaller than .5, it is not immediately obvious that .499... is not such a number. By the time you understand enough to see why this is the case, you might as well have learned to think of the reals properly. Thinking of them as infinite decimals didn't help.
                                    – Cort Ammon
                                    Dec 27 '16 at 3:28






                                  • 2




                                    this distinction is necessary when considering the proof that the real numbers between 0 and 1 are uncountable.
                                    – robert bristow-johnson
                                    Dec 27 '16 at 5:27

























                                  2














                                  One answer to




                                  What is so wrong with thinking of real numbers as infinite decimals?




is that it's somewhat old-fashioned. Another is that it doesn't work really well in more advanced courses in analysis. But Courant's calculus, one of the best texts ever, presents exactly this decimal viewpoint in Section 2 of the first chapter:



[screenshot from Courant, Differential and Integral Calculus, Vol. I, page 8]



                                  That's a screenshot from page 8; you can see it at



                                  https://archive.org/stream/DifferentialIntegralCalculusVolI/Courant-DifferentialIntegralCalculusVolI#page/n23/mode/2up






                                  share|cite|improve this answer



















                                  answered Dec 27 '16 at 0:12


























                                  community wiki





                                  Ethan Bolker









                                  • 1




                                    @amWhy I think it's an interesting bit of history that addresses the OP's question, which is why I posted it. Of course one can link to the full text of the book - if you know that it contains something relevant. Feel free to downvote.
                                    – Ethan Bolker
                                    Dec 27 '16 at 0:25






                                  • 1




                                    I don't question that. But it remains little more than a "link-only" answer.
                                    – amWhy
                                    Dec 27 '16 at 0:29








                                  • 1




                                    I'm not trying to be mean. And I haven't flagged the answer, nor voted to delete it as a link-only answer. I'm trying to inform you, that's all.
                                    – amWhy
                                    Dec 27 '16 at 0:32






                                  • 1




                                    @amWhy I don't find your comments mean (and shouldn't have been snarky about downvote). I too dislike link only answers. I agree that this one's a close call. We just came down on different sides.
                                    – Ethan Bolker
                                    Dec 27 '16 at 0:49

























                                  0














I think it's actually good to teach people that definition of a real number once they're in university and old enough to understand what they're being taught. After reading https://www.inc.com/bill-murphy-jr/science-says-were-sending-our-kids-to-school-much-too-early-and-that-can-hurt-th.html, I think kids learn better in school if they wait until they're older to start grade 1, so I think it's likewise true that when they're older, they're better able to learn that definition of a real number and the reason the teacher is using it.

People who learn something too young sometimes misunderstand what the teacher was trying to teach them, and later have trouble breaking the old habits of what they thought they were being taught. For example, those who were taught what a fraction is, and how to add, subtract, multiply, and divide fractions, at a very young age might think the rational numbers are all the numbers, and later have trouble breaking that habit and learning that irrational numbers exist. Maybe people weren't taught the decimal representation of a real number properly in elementary school; I think they should be taught a very similar definition in university. In elementary school I had an intuitive idea of some of the properties of a complete ordered field, but did not think of the completeness property on my own.

Later, I once tried to figure out what a real number is and came up with my own definition. I first constructed the dyadic rationals, the terminating decimals in base 2; then, for each cut of that set with no boundary element in the set, I invented a number to lie between the two halves of the cut, and saw that the result essentially corresponds to base-2 decimal notations that forbid trailing 1's.

I don't think that, once people are in university, they should simply be taught to derive properties from the fact that $(\mathbb{R}, 1, 0, +, \times, \leq)$ is a complete ordered field. Some people cannot break the old habit of thinking that a decimal representation is a real number, when actually the real numbers already existed and somebody invented a notation for each of them. I think it's more important to teach them not to make the unjustified assumption that the real numbers with those operations have the properties of a complete ordered field that they did think of. Construction from Dedekind cuts of the terminating decimals actually works well, and it also gives an intuitive way of deciding how to represent each real number, one that forbids a string of trailing 9's, unlike what I was taught in elementary school, namely that 0.999... = 1.

Students can then be taught what a complete ordered field is, with Modern Algebra as a prerequisite to that course, and be taught that $+$, $\times$, and $\leq$ have been defined, that $(\mathbb{R}, 1, 0, +, \times, \leq)$ has been proven in ZF to be a complete ordered field which is unique up to isomorphism, and that it's even easier to show it's isomorphic to the base-2 construction than to show it's unique up to isomorphism. Maybe that course could in turn be a prerequisite to a later course on writing formal proofs in ZF, where one test question asks for a formal proof in ZF that there exists a complete ordered field unique up to isomorphism, and only those who can write a complete formal proof get full marks, so that employers on average get better applicants.
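The base-2 construction described above can be made concrete. Below is a minimal sketch (my own illustration; `below`, `binary_digits`, and the sample cut are hypothetical names, not from this answer) that reads off binary digits of a real in $[0,1)$ from a Dedekind-cut membership test over the dyadic rationals; choosing digit 1 whenever the midpoint is still at or below the number gives dyadic numbers their terminating expansion, so no tail of repeating 1's ever appears.

```python
# A sketch of reading off base-2 digits from a Dedekind cut of the dyadic
# rationals. `below(q)` answers whether the dyadic rational q is <= the
# number being represented. (Illustrative only.)

from fractions import Fraction

def binary_digits(below, n):
    """First n binary fractional digits of the number in [0, 1) defined by `below`."""
    digits, lo, step = [], Fraction(0), Fraction(1, 2)
    for _ in range(n):
        if below(lo + step):      # midpoint at or below the number: digit 1
            digits.append(1)
            lo += step
        else:                     # midpoint above the number: digit 0
            digits.append(0)
        step /= 2
    return digits

# The cut of sqrt(2) - 1: a dyadic q in [0, 1) satisfies q <= sqrt(2) - 1
# exactly when (q + 1)^2 <= 2 (exact arithmetic, no floating point).
print(binary_digits(lambda q: (q + 1) ** 2 <= 2, 10))
# -> [0, 1, 1, 0, 1, 0, 1, 0, 0, 0]

# A dyadic number gets its terminating expansion, never trailing 1s:
print(binary_digits(lambda q: q <= Fraction(1, 2), 8))
# -> [1, 0, 0, 0, 0, 0, 0, 0]
```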






                                  share|cite|improve this answer




































                                      edited Dec 10 '18 at 3:30


























                                      community wiki





                                      2 revs
                                      Timothy






























