I'm still not convinced 0.9999... qualifies as a real number. If 0.9999... and 1 do not differ by a real amount but do differ by a non-real amount, would that mean they are different?
I don't know what that means. What is differing by a non-real amount?
In the same way that 3 ≠ pi even though they don't differ by an integer (they differ by a non-integer), maybe 0.9999... differs from 1 by something non-real, like a hyperreal, or something not yet invented, or even something inconceivable.
(I also happen to have a degree in computer science).
It has been my experience that computer scientists simply don't grok the real numbers. In CS everything is computable (except the halting problem, which is the only time in their lives CS majors ever contemplate non-computable phenomena). But almost all real numbers (that is, all but a set of measure zero) are non-computable.
I've heard of plenty of proven undecidable problems. https://en.wikipedia.org/wiki/List_of_u ... e_problems
There's a bunch. Undecidability assumes the Turing machine is the most powerful type of computing device, much as the Church–Turing thesis assumes the human brain is not capable of more. It certainly looks like that's the case, but it's still not proven.
On a side note I used to make turing machines for fun. Here's one I did that calculates pi as far as I want:
it looks like this https://image.ibb.co/fLR9RF/turingmachine.jpg
The way it would work is I put a decimal number like "20" on the memory tape, and then it would manipulate the tape until it finished with "3.14159265358979323846". I had it convert from decimal to binary to do the computations, and then from binary back to decimal. The whole thing functions with binary operators at the lowest level.
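I can't paste the whole machine, but the general shape of such a simulator is simple. Here's a hypothetical sketch in Python — the transition table below is made up for illustration (it does binary increment, not pi):

```python
# Minimal Turing-machine simulator. The transition table here is a made-up
# example (binary increment, least-significant bit on the right), not the
# pi machine described above.

def run_tm(tape, rules, state="scan", blank="_", max_steps=10_000):
    """Run a Turing machine; rules maps (state, symbol) -> (write, move, next)."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += {"R": 1, "L": -1}[move]
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Binary increment: scan to the rightmost bit, then carry leftward.
rules = {
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),    # absorb the carry
    ("carry", "_"): ("1", "L", "done"),    # overflow into a new digit
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run_tm("1011", rules))  # prints "1100": 11 + 1 = 12 in binary
```

The pi machine worked the same way, just with a much bigger rule table for the decimal/binary conversion and the digit extraction.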
compare(0.9, 1) = -1
compare(0.99, 1) = -1
compare(0.999, 1) = -1
compare(0.9999, 1) = -1
compare(0.99999, 1) = -1
compare(0.999999, 1) = -1
The limit as x->1 of compare(x, 1) = -1, agreed? Using this slightly different procedure to compare two real numbers, would this not prove that 0.9999... < 1?
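Here's what I mean as a quick sketch in Python, with my own stand-in for compare, using exact Fractions so floating point can't interfere:

```python
from fractions import Fraction

def compare(a, b):
    """Stand-in for the compare above: -1 if a < b, 0 if equal, 1 if a > b."""
    return (a > b) - (a < b)

# Finite truncations 0.9, 0.99, 0.999, ... are all strictly less than 1.
for k in range(1, 7):
    x = Fraction(10**k - 1, 10**k)   # e.g. 999/1000 for k = 3
    print(compare(x, 1))             # -1 every time
```

Every finite truncation compares as -1, no matter how far out you go.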
We already agree that .9999...9 (finite number of 9's) is strictly less than 1.
However if we have a collection of numbers x_n such that x_n < 1 for each n, and the sequence x_n has the limit x, then the best we can say is that x <= 1. In other words "less than" turns to "less than or equal" in the limit.
Right, I get this. The red flag comes up when we see we're solving for what the value approaches rather than what the actual value is. For example, say we have the function f(x) = (x-2)/(x-2), whose graph looks like the horizontal line y = 1 except for a hollow point at (2, 1). The function has no value at f(2), yet if we take the limit as x->2 of f(x) we get 1. Or take a piecewise function f(x) = x when x is not equal to 5, and f(x) = 15 when x = 5. Its graph looks like y = x except it jumps at the single point (5, 15). In this case the actual value f(5) = 15, but the limit as x->5 of f(x) = 5.
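Both examples are easy to poke at numerically; here's a quick Python sketch (hypothetical, just illustrating the limit-versus-value distinction):

```python
def f(x):
    return (x - 2) / (x - 2)    # undefined at x = 2, equal to 1 everywhere else

def g(x):
    return 15 if x == 5 else x  # jumps to 15 at the single point x = 5

# Approaching x = 2, f stays at 1 -- but f(2) itself is undefined.
print(f(2.001), f(1.999))  # 1.0 1.0
try:
    f(2)
except ZeroDivisionError:
    print("f(2) is undefined")

# The limit of g at 5 is 5, but the actual value g(5) is 15.
print(g(4.999), g(5.001), g(5))  # 4.999 5.001 15
```

The limit tells you where the function is headed, not what it equals at the point.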
Sure, I get this.
Certainly we agree that 1/n > 0 for every n, yet limit(n->infinity) 1/n = 0.
* P.S. -- by the way, on a totally different topic of interest --
The Compare function on the real numbers is extremely complicated! I'm sure you know that when you need to compare two floating point numbers, you can't just say "if float1 == float2 ...". Rather you have to pick an epsilon, say eps = .0001, and you go, "if abs(float1 - float2) < eps ..." You may have run across this, especially if you did any numerical analysis.
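In Python that pattern looks roughly like this (the eps value and helper name are my own illustration; the standard library's math.isclose does the relative-tolerance version properly):

```python
import math

def nearly_equal(a, b, eps=1e-9):
    """Naive absolute-epsilon comparison, as described above."""
    return abs(a - b) < eps

# The classic example: 0.1 + 0.2 is not exactly 0.3 in binary floating point.
print(0.1 + 0.2 == 0.3)                 # False
print(nearly_equal(0.1 + 0.2, 0.3))     # True

# A fixed absolute epsilon breaks down for large numbers, where the spacing
# between adjacent doubles exceeds eps; a relative tolerance handles that.
print(nearly_equal(1e16, 1e16 + 2.0))   # False: the difference is 2
print(math.isclose(1e16, 1e16 + 2.0))   # True: relatively, they're almost equal
```

That's why picking a single epsilon is already a judgment call: it has to match the scale of the numbers you're comparing.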
Yeah, there are precision issues with floats: you either get precision with small values, or you can handle larger numbers but lose decimal precision as a result. I usually avoid the single-precision float datatype and just use doubles myself. I've taken numerical analysis courses too. One time I made a calculator using some of the techniques from there that would compute arithmetic functions like sin, cos, e^x, ln, log, and x^y to as many decimals as I wanted. Here's the source code http://www.vbcode.com/Asp/showzip.asp?Z ... theID=8585
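The linked VB source isn't reproduced here, but the idea was along these lines. Here's a rough Python sketch (my stand-in, not the original code) using the standard library's Decimal to sum the Taylor series for e^x to whatever precision you ask for:

```python
from decimal import Decimal, getcontext

def exp_taylor(x, digits=50):
    """e**x via its Taylor series, summed in arbitrary-precision Decimal."""
    getcontext().prec = digits + 5        # extra guard digits against rounding
    x = Decimal(x)
    term, total, n = Decimal(1), Decimal(1), 0
    while abs(term) > Decimal(10) ** -(digits + 2):
        n += 1
        term *= x / n                     # next term: x**n / n!
        total += term
    return +total                         # unary + rounds to current precision

print(exp_taylor("1", 50))  # e: 2.7182818284590452353602874713...
```

With a datatype like this you really can test two numbers for exact equality, which is the point I was making.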
I did that 14 years ago when I was bored.
I Googled around, and comparing floats is a very complicated subject. That's another area, by the way, in which computer scientists don't quite get the reals. Floating-point numbers are not real numbers; they're a lot different. So all the intuitions they teach you in CS are the wrong ones for math. But floating-point numbers themselves are quite interesting. I doubt .999... = 1 in floating point. The problem is that you can't TELL if they're equal, only whether they're close to each other.
Yeah the floats suck. I just use double which does have limits but if I want to go beyond that I just create my own datatype like with that calculator. I can tell that 2 numbers are exactly equal with that.
I was leafing through a book on numerical analysis in a bookstore once. This is my ONLY exposure to the subject but it was a good one. They gave the example of the famous harmonic series 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + ... In freshman calculus they give a beautifully simple proof that this series diverges. Even though the individual terms go to zero, the sequence of partial sums gets as large as you like. This series has no limit.
Yeah, it increases as slowly as the log function, as I recall.
On the other hand on any physical computer, this sequence must converge. As long as available storage is finite, there is some largest denominator N that you can represent, and then 1/n is effectively 0 for all n > N. So the series always has a finite sum. If you converted all the matter in the universe into memory for this computation, the numerical harmonic series must converge.
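You can actually watch this happen in a deliberately low-precision type. Here's a sketch using numpy's 16-bit floats as a stand-in for a tiny machine, summing 1/n from n = 1 until adding the next term no longer changes the total:

```python
import numpy as np

# Sum the harmonic series in 16-bit floats until adding the next term
# no longer moves the running total.
total = np.float16(0.0)
n = 0
while True:
    n += 1
    term = np.float16(1.0) / np.float16(n)
    new_total = np.float16(total + term)
    if new_total == total:   # 1/n fell below the precision of the sum
        break
    total = new_total

print(n, total)  # stops after roughly 500 terms, with a total near 7
```

The mathematical series diverges, but the machine version stalls as soon as 1/n drops below half the spacing between representable numbers around the running total. A bigger format just pushes the stall further out; it never removes it.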
Yeah once 1/n is rounded to 0 for everything I suppose that's as far as it goes.
The real numbers are very different from computational models of real numbers. Most real numbers cannot be computed: there is no Turing machine that generates their digits. The real numbers lie far beyond computer science.
Computer science is part of applied math, not so much the theoretical stuff. A computer should be able to do anything your brain can, though, according to the Church–Turing thesis. I'd even go as far as to say computer science could create artificial intelligence and a technological singularity that would cause its intelligence to dwarf ours https://en.wikipedia.org/wiki/Technological_singularity. At that point a strong AI could discover more math than any human ever could, in theory.
I'm still undecided whether to vote "0.9999... < 1", "0.9999... = 1", or "don't know", because of conflicting assumptions that can make either possibility true. They can't both be true in the same context, according to the axiom of non-contradiction anyway. I'm sure I'll vote "don't know" in the end, since I just know better than to pretend to know. Just because I wrote a proof for 0.9999... < 1 does not mean I accept it; it represents a rational justification for 0.9999... < 1. Here is a mathematical proof someone wrote that 0.9999... < 1: https://arxiv.org/pdf/0811.0164.pdf
That one looks more thorough.