Is 0.9999... really the same as 1?

Poll: Is 0.9999... really the same as 1?

Yes, 0.9999... = 1: 5 votes (31%)
No, 0.9999... is slightly less than 1: 7 votes (44%)
Other: 4 votes (25%)

Total votes: 16

A_Seagull
Posts: 907
Joined: Thu Jun 05, 2014 11:09 pm

Re: Is 0.9999... really the same as 1?

Post by A_Seagull »

wtf wrote:
What they eventually all did was finally make calculus legit. The way they did it was:

* To base everything on set theory and formal rules of logical derivation.

* To build the natural numbers out of the empty set and the rules of set theory, then to build the integers out of the naturals, the rationals out of the integers, and finally the reals out of the rationals.

It seems to me that the cure is worse than the disease. To derive simple integers from complex set theory is like taking sparkling diamonds, whose existence is unquestioned, and grinding them into coal dust and thinking something useful has been achieved.

While the concepts around limits were perhaps logically non-rigorous, there was no ambiguity in the way calculus was used. I think it must be questioned what the philosophers of mathematics were trying to achieve and whether it was justifiable. And whether integers derived from sets actually constitute an achievement.
wtf
Posts: 1178
Joined: Tue Sep 08, 2015 11:36 pm

Re: Is 0.9999... really the same as 1?

Post by wtf »

A_Seagull wrote: It seems to me that the cure is worse than the disease. To derive simple integers from complex set theory is like taking sparkling diamonds, whose existence is unquestioned, and grinding them into coal dust and thinking something useful has been achieved.
That's an argument against reductionism in general. A poet looks at a rainbow and sees unicorns and faeries. A physicist looks at a rainbow and sees diffraction and explains it in terms of quantum electrodynamics. Is that a bad thing? I don't know. Feynman discussed that point. He said that understanding rainbows in terms of science adds to our wonder at their beauty, it doesn't subtract from it.

But your criticism is far more general than the simple history of the formalization of calculus that I outlined. Your attack is on the entire western way of thinking. I'm not saying you're wrong, of course, only unfocussed.

Also it's interesting that you used the phrase "complex set theory" when in fact I wrote that the axioms are "perfectly reasonable." I just wanted to note your rhetorical tactic.

Do you think Euclid was wrong when he axiomatized geometry? After all, aren't points and lines "sparkling diamonds" too? Do you object to axiomatics in general or just in set theory?
A_Seagull wrote: While the concepts around limits were perhaps logically non-rigorous, there was no ambiguity in the way calculus was used.
Yes there was, otherwise nobody would have cared. As 19th century mathematicians worked with advanced infinite series such as the trigonometric series of Fourier, the lack of a logical foundation became a very obvious problem that needed to be solved.
A_Seagull wrote: I think it must be questioned what the philosophers of mathematics were trying to achieve
But it was not philosophers but the hard-core mathematicians themselves who saw the need for logical rigor. Of course philosophers were involved too, Russell and Frege and many others. But it was the 19th century mathematicians who saw the need to get the foundations right. That's Cauchy and Weierstrass.

It's worth noting that Cantor did not wake up one day and say, "I'm going to revolutionize mathematical foundations and piss off Kronecker." Rather, Cantor was engaged in the study of the zeros of trigonometric series -- math that came directly from Fourier's studies of the physics of heat. A purely physical consideration led directly to set theory. In order to study the zeros of trig series, Cantor was led to discover transfinite ordinals. That's a point not often noted in popularizations, but it's perfectly well documented. From the physics of heat to Fourier series to transfinite ordinals and cardinals. That's the historical progression.
A_Seagull wrote: and whether it was justifiable. And whether integers derived from sets actually constitutes an achievement.
Intuitionists and structuralists have of course objected to the standard arithmetization of analysis. Paul Benacerraf got the structuralist ball rolling by pointing out that numbers can not possibly "be" sets, they can only be modeled by sets. A point that was insightful at the time but that is now taken to be obvious as the structuralist revolution (in the form of Category theory) rolls through modern math.

But of course set theory and 20th century math have been a great achievement. Without proper rigor we could not be sure we know what we're talking about. If you don't like 20th century math you have to throw away 20th century physics too. We're back to the unicorn and faery theory of rainbows.

Are you advocating against scientific reductionism altogether, or only set theory? And why? How much of modernity do you wish to throw away in the name of "sparkling diamonds" of naive intuition?
Last edited by wtf on Sat Mar 04, 2017 7:53 pm, edited 2 times in total.
marsh8472
Posts: 54
Joined: Sun Oct 19, 2014 3:06 pm

Re: Is 0.9999... really the same as 1?

Post by marsh8472 »

wtf wrote:
marsh8472 wrote: But should 0.99999... even be considered a real number? It's a series that has no end, so how can a value be obtained without an ending to the series? According to the definition, the real numbers are the set of all rational and irrational numbers. A rational number is one that can be written as a fraction. We cannot write 0.99999... as a fraction unless we want to just assume it equals 1, but this would be circular reasoning if we wanted to prove 0.9999... is a real number. If 0.999999... were irrational this would contradict the idea that it equals 1, a rational number. If the Archimedean axiom states that there is no smallest number less than 1, would this not show that 0.99999... is not a number?
Those were once great questions. These exact problems bedevilled mathematicians for 200 years after Newton. Everyone knew that calculus worked but nobody knew how to make the idea of a limit logically rigorous. This is from 1687, say, the year of the publication of the Principia, to the 1880's, the era of Cauchy and Weierstrass and Cantor.

In fact it was during those two centuries that mathematicians were trying to advance and apply calculus but were increasingly concerned about the lack of a proper logical foundation. Nobody really knew what a limit was. People started paying attention to the problem.

What they eventually all did was finally make calculus legit. The way they did it was:

* To base everything on set theory and formal rules of logical derivation.

* To build the natural numbers out of the empty set and the rules of set theory, then to build the integers out of the naturals, the rationals out of the integers, and finally the reals out of the rationals.

* Now that we had a logically rigorous theory of the real numbers, we could finally, 200 years after Newton, provide a logically rigorous explanation of dy/dx and the limits of infinite series. We finally tamed Berkeley's "Ghosts of departed quantities." (George Berkeley, my favorite philosopher).

* As a byproduct of this work, infinitesimals were banished from math. It's true that there are logically consistent alternate models that contain infinitesimals, and they are of interest in their own right. But in standard math, there are no infinitesimals.

This entire intellectual project is known as the Arithmetization of Analysis. The Wiki article https://en.wikipedia.org/wiki/Arithmeti ... f_analysis is short but has some links. Analysis is the fancy word for calculus, and by arithmetization they mean basing the math of the infinite and the continuous on the axioms of set theory and rules of deduction.

In short, we are finally able to base the continuous on the discrete.

The tl;dr is that you are right to ask those questions, but they all got solved around 1870-1930. Today when we say that .999... = 1 we can prove it directly from a set of axioms that everyone agrees are reasonable. But yeah, for two hundred years calculus was totally bogus. Today it's legit.
It looks like it's more like a preference than a truth. I'm thinking they could make math work whether 0.999999... = 1 or not.

One of the more convincing proofs I was thinking about is this one:

1/3 = 0.333333...
2/3 = 0.666666...
add them
3/3 = 0.999999...

In an attempt to challenge a proof like that, I would describe the general algorithm for determining the digits of the sum of two decimals when adding from right to left.

The digits of d and e, and of their sum f:

d1 . d2 d3 d4 d5 ... dk d(k+1) d(k+2) ...
e1 . e2 e3 e4 e5 ... ek e(k+1) e(k+2) ...
add them
f1 . f2 f3 f4 f5 ... fk f(k+1) f(k+2) ...

The algorithm for determining the digit f(k) would go like this:

f(k) = (dk + ek) mod 10      if d(k+1) + e(k+1) < 9                 (no carry arrives from the right)
     = (dk + ek + 1) mod 10  if d(k+1) + e(k+1) > 9                 (a carry arrives from the right)
     = (dk + ek) mod 10      if d(k+1) + e(k+1) = 9 and f(k+1) = 9  (the pending carry dies out)
     = (dk + ek + 1) mod 10  if d(k+1) + e(k+1) = 9 and f(k+1) = 0  (the pending carry propagates)

We can't know what the digits of the sum are, from this right-to-left adding standpoint, without knowing whether a carry arrives from the next digit. Since 0.9999... has infinitely many non-terminating digits, the value of f(k) is indeterminate when applying this algorithm: the check for a carry never settles. If someone were evaluating a proof that 0.999999... = 1, they could have reason to reject this one, because stating 0.333333... + 0.666666... = 0.999999... implies that it is already known that the first 6 decimal places of the sum of 0.333333... + 0.666666... are all 9's, which is a bit misleading.
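Here is a minimal sketch of that carry problem in Python (the function name and setup are just for illustration): it scans to the right looking for a position that settles the carry, and gives up if none is found.

def digit_of_sum(d, e, k):
    # Try to determine digit k of the sum of 0.d1d2d3... and 0.e1e2e3...
    # (digits stored in lists, index 0 = first decimal place) by scanning
    # right for a position that settles the carry question.
    for j in range(k + 1, len(d)):
        pair = d[j] + e[j]
        if pair < 9:                      # no carry can reach position k
            return (d[k] + e[k]) % 10
        if pair > 9:                      # a carry definitely reaches position k
            return (d[k] + e[k] + 1) % 10
        # pair == 9: the carry question is pushed one more place to the right
    return None                           # never settled: indeterminate

N = 50
thirds = [3] * N        # truncation of 0.333...
two_thirds = [6] * N    # truncation of 0.666...
print(digit_of_sum(thirds, two_thirds, 0))   # None -- every pair sums to 9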
wtf
Posts: 1178
Joined: Tue Sep 08, 2015 11:36 pm

Re: Is 0.9999... really the same as 1?

Post by wtf »

marsh8472 wrote: It looks like it's more like a preference than a truth.
Please note that I made no claim that .999... = 1 is true. I only say that it's a valid derivation from a set of axioms. We have no referent for .999... in physics, since we can't measure below the Planck length. I make this distinction so that we can think clearly about the "truth" (meaning syntactic validity) in math, versus truth about the world.

I think you mean true in the math sense but I just wanted to make that point again. Nobody says .999... = 1 is a truth about the world. It's only a truth within formal math.
marsh8472 wrote: I'm thinking they could make math work whether 0.999999... = 1 or not
Possibly, but I don't think the rest of your post supports that claim. Rather, you presented a perfectly correct refutation of a common fake proof.
marsh8472 wrote: One of the more convincing proofs I was thinking about is this one

1/3 = 0.333333...
2/3 = 0.666666...
add them
3/3 = 0.999999...
Doesn't convince me. If you don't believe .999... = 1 then why would you believe .333... = 1/3? The reason is that we are taught .333... = 1/3 earlier in our education than .999... = 1. But in fact both statements depend on a proper definition of the real numbers, limits, and infinite sums. That's why this is a fake proof. It's a heuristic for beginners but not an actual proof. Of course it DOES turn out to be valid, but only after we rigorize the theory of infinite series. By which time, .999... = 1 is trivial.

Secondly -- and this bears on the correct point you are about to make -- even if 1/3 = .333..., how can we multiply each term by a constant, or add two infinite series? To do that we need to prove a theorem that says we can do that. And once we prove that theorem, .999... = 1 follows easily.
marsh8472 wrote: In an attempt to challenge a proof like that
Your reasoning is absolutely correct. The problem is that you are challenging a fake proof. There's no reason to believe that .333... = 1/3, or that we can term-by-term add two infinite series, or term-by-term multiply the terms of an infinite series.

Remember that the distributive law of the real numbers says that a(x + y) = ax + ay. We can use induction to extend that result to any FINITE sum: a(x + y + z + w) = ax + ay + az + aw. But we may NOT multiply an infinite sum term by term. In fact we have no idea what it means to write down an infinite sum until we define it. That is the content of the rigorous version of calculus. Once we have those theorems -- term by term addition of series and term by term multiplication by a constant -- only then does the above proof become valid. But by the time we do that, .999... = 1 is already proven. So this .333... business is not a proof, it's a story for beginners.

That said, your reasoning is perfectly correct. We can NOT blithely add two infinite series term by term until we have carefully defined what we mean by the sum of an infinite series, and then PROVED that we can add term by term. Till we do that, it's handwavy stories for the tourists.
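For what it's worth, here is a small numerical sketch (not a proof, and the names are mine) of what the rigorous definition delivers: each decimal is treated as the limit of its partial sums, and the partial sums of the term-by-term total differ from 1 by exactly 10^-n.

from fractions import Fraction

def partial_sum(digit, n):
    # Exact value of digit/10 + digit/100 + ... + digit/10**n.
    return sum(Fraction(digit, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    a = partial_sum(3, n)            # truncation of 0.333...
    b = partial_sum(6, n)            # truncation of 0.666...
    print(n, a + b, 1 - (a + b))     # the gap is exactly 1/10**n, shrinking to 0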
marsh8472 wrote: I would describe the general algorithm for determining the digits of the sum of two decimals when adding from right to left.

The digits of d and e, and of their sum f:

d1 . d2 d3 d4 d5 ... dk d(k+1) d(k+2) ...
e1 . e2 e3 e4 e5 ... ek e(k+1) e(k+2) ...
add them
f1 . f2 f3 f4 f5 ... fk f(k+1) f(k+2) ...

The algorithm for determining the digit f(k) would go like this:

f(k) = (dk + ek) mod 10      if d(k+1) + e(k+1) < 9                 (no carry arrives from the right)
     = (dk + ek + 1) mod 10  if d(k+1) + e(k+1) > 9                 (a carry arrives from the right)
     = (dk + ek) mod 10      if d(k+1) + e(k+1) = 9 and f(k+1) = 9  (the pending carry dies out)
     = (dk + ek + 1) mod 10  if d(k+1) + e(k+1) = 9 and f(k+1) = 0  (the pending carry propagates)

We can't know what the digits of the sum are, from this right-to-left adding standpoint, without knowing whether a carry arrives from the next digit. Since 0.9999... has infinitely many non-terminating digits, the value of f(k) is indeterminate when applying this algorithm: the check for a carry never settles. If someone were evaluating a proof that 0.999999... = 1, they could have reason to reject this one, because stating 0.333333... + 0.666666... = 0.999999... implies that it is already known that the first 6 decimal places of the sum of 0.333333... + 0.666666... are all 9's, which is a bit misleading.
You are entirely correct. That's exactly why mathematicians struggled for two centuries to put calculus on a logically rigorous footing. Something they did around 1900 plus/minus twenty years in each direction. Now we don't TEACH this rigorization to anyone till Real Analysis class, which is only taken by math majors (and a few brave physics majors). So people end up asking the same questions mathematicians did until a satisfactory logical structure for infinite series was worked out at the turn of the twentieth century.

Today we can prove that it's valid to add two convergent infinite series term-by-term. But without this sophisticated proof, your objection is correct. It's misleading to add two infinite series term-by-term as if it's ok to do that, before we've shown that it's ok to do that.
A_Seagull
Posts: 907
Joined: Thu Jun 05, 2014 11:09 pm

Re: Is 0.9999... really the same as 1?

Post by A_Seagull »

wtf wrote:
A_Seagull wrote: It seems to me that the cure is worse than the disease. To derive simple integers from complex set theory is like taking sparkling diamonds, whose existence is unquestioned, and grinding them into coal dust and thinking something useful has been achieved.
That's an argument against reductionism in general. A poet looks at a rainbow and sees unicorns and faeries. A physicist looks at a rainbow and sees diffraction and explains it in terms of quantum electrodynamics. Is that a bad thing? I don't know. Feynman discussed that point. He said that understanding rainbows in terms of science adds to our wonder at their beauty, it doesn't subtract from it.

But your criticism is far more general than the simple history of the formalization of calculus that I outlined. Your attack is on the entire western way of thinking. I'm not saying you're wrong, of course, only unfocussed.

Also it's interesting that you used the phrase "complex set theory" when in fact I wrote that the axioms are "perfectly reasonable." I just wanted to note your rhetorical tactic.

Do you think Euclid was wrong when he axiomatized geometry? After all, aren't points and lines "sparkling diamonds" too? Do you object to axiomatics in general or just in set theory?
A_Seagull wrote: While the concepts around limits were perhaps logically non-rigorous, there was no ambiguity in the way calculus was used.
Yes there was, otherwise nobody would have cared. As 19th century mathematicians worked with advanced infinite series such as the trigonometric series of Fourier, the lack of a logical foundation became a very obvious problem that needed to be solved.
A_Seagull wrote: I think it must be questioned what the philosophers of mathematics were trying to achieve
But it was not philosophers but the hard-core mathematicians themselves who saw the need for logical rigor. Of course philosophers were involved too, Russell and Frege and many others. But it was the 19th century mathematicians who saw the need to get the foundations right. That's Cauchy and Weierstrass.

It's worth noting that Cantor did not wake up one day and say, "I'm going to revolutionize mathematical foundations and piss off Kronecker." Rather, Cantor was engaged in the study of the zeros of trigonometric series -- math that came directly from Fourier's studies of the physics of heat. A purely physical consideration led directly to set theory. In order to study the zeros of trig series, Cantor was led to discover transfinite ordinals. That's a point not often noted in popularizations, but it's perfectly well documented. From the physics of heat to Fourier series to transfinite ordinals and cardinals. That's the historical progression.
A_Seagull wrote: and whether it was justifiable. And whether integers derived from sets actually constitutes an achievement.
Intuitionists and structuralists have of course objected to the standard arithmetization of analysis. Paul Benacerraf got the structuralist ball rolling by pointing out that numbers can not possibly "be" sets, they can only be modeled by sets. A point that was insightful at the time but that is now taken to be obvious as the structuralist revolution (in the form of Category theory) rolls through modern math.

But of course set theory and 20th century math have been a great achievement. Without proper rigor we could not be sure we know what we're talking about. If you don't like 20th century math you have to throw away 20th century physics too. We're back to the unicorn and faery theory of rainbows.

Are you advocating against scientific reductionism altogether, or only set theory? And why? How much of modernity do you wish to throw away in the name of "sparkling diamonds" of naive intuition?

I am not at all opposed to reductionism. The claim for sets as the foundation to maths is not even reductionism. I don't believe that you can take the general principles of maths and reduce them to set theory. Instead what has happened is that mathematicians have arbitrarily introduced the concept of sets and then proceeded to deduce something that resembles number theory from them. There is no inherent reason why it should be sets rather than some other theory that is used to deduce something that resembles number theory.

In order to introduce set theory into the system of maths, new axioms and processes of inference have to be introduced. As I see it this makes it all unduly complicated. Far more straightforward to simply introduce "0.9999999.... = 1" as an axiom. Job done! :)
Melchior
Posts: 839
Joined: Mon Apr 28, 2014 3:20 pm

Re: Is 0.9999... really the same as 1?

Post by Melchior »

A_Seagull wrote:An intuitive way of understanding why 0.99999999...... = 1 is that it is not possible to create a number between the two, i.e. there is no way of having a number that is larger than 0.99999999.... and yet smaller than 1. So the two descriptions must be of the same number.
Nonsense!
wtf
Posts: 1178
Joined: Tue Sep 08, 2015 11:36 pm

Re: Is 0.9999... really the same as 1?

Post by wtf »

Melchior wrote:
A_Seagull wrote:An intuitive way of understanding why 0.99999999...... = 1 is that it is not possible to create a number between the two, i.e. there is no way of having a number that is larger than 0.99999999.... and yet smaller than 1. So the two descriptions must be of the same number.
Nonsense!
I would say A_Seagull has it exactly right. That is the correct mathematical intuition. If .999... = 1 were false, their difference would be nonzero. So it would be, say, some small number ε > 0. That's the Greek lower case epsilon, typically used in math to denote an arbitrarily small (but nonzero!) real number.

Now you claim that 1 - .999... = ε, so that 1 and .999... are not the same number.

But no matter how small ε is, I can always find a natural number n such that the difference 1 - 0.99...9 (with n decimal places) is smaller than ε. That is a consequence of the Archimedean property of the real numbers.

For example if ε = 1/10 I can just take n = 2, and then 1 - .99 = .01 < .1 = ε. You can do this for any ε no matter how small, as long as it's a real number greater than zero.

Since there can be no nonzero difference (no possible ε) between .999... and 1, they represent the same number. That's the proof.
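Here is a tiny sketch of that argument for concrete rational ε (the helper name is my own invention): given any proposed gap, it exhibits an n with 1 - 0.99...9 (n nines) = 10^-n below it.

from fractions import Fraction

def nines_needed(eps):
    # Smallest n such that 10**-n < eps, found by simple search.
    n = 1
    while Fraction(1, 10**n) >= eps:
        n += 1
    return n

for eps in (Fraction(1, 10), Fraction(1, 1000000), Fraction(3, 10**17)):
    n = nines_needed(eps)
    print(eps, n, Fraction(1, 10**n) < eps)   # always True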

You didn't say why you think it's nonsense. Perhaps you are thinking this is saying something about the real world. It's not. It's saying something about the real numbers, which are a pure mathematical abstraction. It's valid in its own sphere, like chess. Use at your own risk! All of physics is based on the real numbers, which presents many philosophical questions.

But if we stick strictly to the math, there is simply no question that .999... = 1.
Greta
Posts: 4389
Joined: Sat Aug 08, 2015 8:10 am

Re: Is 0.9999... really the same as 1?

Post by Greta »

wtf changed my mind earlier in the thread but I can't change my prior vote (for < 1). I would now vote that 0.999... = 1. Two different ways of expressing the same concept.

Ideally the current vote count in this thread would be 2 votes for each category.
wtf
Posts: 1178
Joined: Tue Sep 08, 2015 11:36 pm

Re: Is 0.9999... really the same as 1?

Post by wtf »

Greta wrote:wtf changed my mind earlier in the thread but I can't change my prior vote (for < 1). I would now vote that 0.999... = 1. Two different ways of expressing the same concept.

Ideally the current vote count in this thread would be 2 votes for each category.
I didn't even realize there was a vote. It's not actually a matter of opinion, only whether one understands that we're talking about math and not something else. Of course the something else is important too. Physics is intimately related to math in a way that chess isn't. It's not enough to put on formalist blinders. That's the hard part of the question. The inadequacy of the formalist position. It's not just a game. Math is something more than a game, and that's hard to pin down.
Greta
Posts: 4389
Joined: Sat Aug 08, 2015 8:10 am

Re: Is 0.9999... really the same as 1?

Post by Greta »

wtf wrote:
Greta wrote:wtf changed my mind earlier in the thread but I can't change my prior vote (for < 1). I would now vote that 0.999... = 1. Two different ways of expressing the same concept.

Ideally the current vote count in this thread would be 2 votes for each category.
I didn't even realize there was a vote. It's not actually a matter of opinion, only whether one understands that we're talking about math and not something else. Of course the something else is important too. Physics is intimately related to math in a way that chess isn't. It's not enough to put on formalist blinders. That's the hard part of the question. The inadequacy of the formalist position. It's not just a game. Math is something more than a game, and that's hard to pin down.
I was satisfied with the formalist blinders for the purpose of the thread's question, leaving others to speak. I was just embarrassed at my vote.

As far as I can tell, maths is a language. Just as we can use words to create stories, mathematicians can create mathematical models that may or may not accord with physical reality.
wtf
Posts: 1178
Joined: Tue Sep 08, 2015 11:36 pm

Re: Is 0.9999... really the same as 1?

Post by wtf »

A_Seagull wrote: I am not at all opposed to reductionism. The claim for sets as the foundation to maths is not even reductionism. I don't believe that you can take the general principles of maths and reduce them to set theory.
That's fine. There are many substantive arguments along those lines. Various flavors of intuitionism and structuralism criticize standard set theory. Have you got a particular alternative or criticism in mind?

Set theory is a historically contingent idea, only a century old. It may be out of fashion in a few decades. What of it? It certainly won the 20th century. You say you "don't believe." Can you elaborate? I'm not sure where you're coming from. I could say, "I don't believe in the atomic theory of matter," and it's not clear if I'm simply ignorant of 20th century physics or highly knowledgeable and making a deeper argument.
A_Seagull wrote: Instead what has happened is that mathematicians have arbitrarily introduced the concept of sets
As I mentioned earlier, it wasn't exactly arbitrary. Sets were "in the air" at that time. Cantor's transfinite ordinals came directly out of his study of trigonometric series, which came directly from Fourier's study of the physical phenomenon of heat distribution. A guy named Paul du Bois-Reymond actually invented the technique of Cantor's diagonal argument before Cantor did. If Cantor had never lived, someone else would have discovered set theory. And the smartest people in the world, people like Frege, Russell, and Hilbert, jumped on board set theory as soon as they saw it. You can't say this was arbitrary, not without a lot more evidence or some line of argument. It was everything BUT arbitrary. The development of set theory was very natural.

Certainly the idea of collections is natural. If I have two collections I can combine them into one, or consider the set of objects that are in both collections (left-handed redheads). Set theory is a formalization of the natural idea of collections. I am very curious to understand why you object to set theory.
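Just to make the everyday idea concrete (a toy example with made-up names), any language with a set type already exposes exactly those two operations:

left_handed = {"Ann", "Bob", "Carol"}
redheads = {"Bob", "Carol", "Dave"}

print(left_handed | redheads)   # union: combine the two collections into one
print(left_handed & redheads)   # intersection: the left-handed redheads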

Again, I'd like to hear your argument, not just words like "I don't believe," and "arbitrary." What is the basis of your objections? Perhaps I can answer them. Or perhaps I'll agree with them.
A_Seagull wrote: and then proceeded to deduce something that resembles number theory from them. There is no inherent reason why it should be sets rather than some other theory that is used to deduce something that resembles number theory.
Absolutely correct. Have you heard of the Elementary Theory of the Category of Sets (ETCS)? This is a structuralist account of the natural numbers that uses category theory instead of set theory. Very interesting approach. Note that it doesn't yet have a Wikipedia page. It's cutting edge stuff.

But nobody claims that set theory is the "only" possible foundation for math. That's a strawman argument. Set theory is one foundation for math that won the 20th century and is still the standard foundation. It's not the only possible foundation nor is it without its philosophical and mathematical flaws and difficulties.
A_Seagull wrote: In order to introduce set theory into the system of maths, new axioms and processes of inference have to be introduced. As I see it this makes it all unduly complicated. Far more straightforward to simply introduce "0.9999999.... = 1" as an axiom. Job done! :)
No that would be a disaster. If math worked by saying, "Whatever theorem I want to prove, I'll just make it an axiom," then everyone would have their own list of axioms and nobody could communicate math to anyone else.

The beauty of set theory (and now category theory) is that we have a unified system of axioms that allow us to accurately talk about math with others. That's the point of foundations.

By the way Homotopy type theory (HoTT) is yet another contemporary alternative foundation.

So to sum this up, we have classical (20th century) objections to set theory along the lines of intuitionism and structuralism. We have contemporary alternative foundations such as ETCS and HoTT. And of course we have traditional old set theory, in which major developments are happening every day.

So if you don't like set theory, I have no problem with that. I can enumerate lots of problems with set theory as a foundational approach. The deepest one being, how can we possibly imagine that we're modeling the continuum with a discrete structure? That's a great philosophical question.

But I would like to better understand where you are coming from in terms of your specific objections.
A_Seagull
Posts: 907
Joined: Thu Jun 05, 2014 11:09 pm

Re: Is 0.9999... really the same as 1?

Post by A_Seagull »

wtf wrote:
A_Seagull wrote: I am not at all opposed to reductionism. The claim for sets as the foundation to maths is not even reductionism. I don't believe that you can take the general principles of maths and reduce them to set theory.
That's fine. There are many substantive arguments along those lines. Various flavors of intuitionism and structuralism criticize standard set theory. Have you got a particular alternative or criticism in mind?

Set theory is a historically contingent idea, only a century old. It may be out of fashion in a few decades. What of it? It certainly won the 20th century. You say you "don't believe." Can you elaborate? I'm not sure where you're coming from. I could say, "I don't believe in the atomic theory of matter," and it's not clear if I'm simply ignorant of 20th century physics or highly knowledgeable and making a deeper argument.

What I meant was that I don't believe that you can take the general principles of mathematics and through logical analysis arrive at a foundation of sets, because I cannot see how the concept of 'sets' is contained within the general principles of mathematics. (Whereas I do believe that you can take the general principles of atoms and arrive at a theory of protons and electrons.)

A_Seagull wrote: Instead what has happened is that mathematicians have arbitrarily introduced the concept of sets
As I mentioned earlier, it wasn't exactly arbitrary. Sets were "in the air" at that time. Cantor's transfinite ordinals came directly out of his study of trigonometric series, which came directly from Fourier's study of the physical phenomenon of heat distribution. A guy named Paul du Bois-Reymond actually invented the technique of Cantor's diagonal argument before Cantor did. If Cantor had never lived, someone else would have discovered set theory. And the smartest people in the world, people like Frege, Russell, and Hilbert, jumped on board set theory as soon as they saw it. You can't say this was arbitrary, not without a lot more evidence or some line of argument. It was everything BUT arbitrary. The development of set theory was very natural.

Certainly the idea of collections is natural. If I have two collections I can combine them into one, or consider the set of objects that are in both collections (left-handed redheads). Set theory is a formalization of the natural idea of collections. I am very curious to understand why you object to set theory.

Again, I'd like to hear your argument, not just words like "I don't believe," and "arbitrary." What is the basis of your objections? Perhaps I can answer them. Or perhaps I'll agree with them.
A_Seagull wrote: and then proceeded to deduce something that resembles number theory from them. There is no inherent reason why it should be sets rather than some other theory that is used to deduce something that resembles number theory.
Absolutely correct. Have you heard of the Elementary Theory of the Category of Sets (ETCS)? This is a structuralist account of the natural numbers that uses category theory instead of set theory. Very interesting approach. Note that it doesn't yet have a Wikipedia page. It's cutting edge stuff.

But nobody claims that set theory is the "only" possible foundation for math. That's a strawman argument. Set theory is one foundation for math that won the 20th century and is still the standard foundation. It's not the only possible foundation nor is it without its philosophical and mathematical flaws and difficulties.
A_Seagull wrote: In order to introduce set theory into the system of maths, new axioms and processes of inference have to be introduced. As I see it this makes it all unduly complicated. Far more straightforward to simply introduce "0.9999999.... = 1" as an axiom. Job done! :)
No that would be a disaster. If math worked by saying, "Whatever theorem I want to prove, I'll just make it an axiom," then everyone would have their own list of axioms and nobody could communicate math to anyone else.

The beauty of set theory (and now category theory) is that we have a unified system of axioms that allow us to accurately talk about math with others. That's the point of foundations.

By the way Homotopy type theory (HoTT) is yet another contemporary alternative foundation.

So to sum this up, we have classical (20th century) objections to set theory along the lines of intuitionism and structuralism. We have contemporary alternative foundations such as ETCS and HoTT. And of course we have traditional old set theory, in which major developments are happening every day.

So if you don't like set theory, I have no problem with that. I can enumerate lots of problems with set theory as a foundational approach. The deepest one being, how can we possibly imagine that we're modeling the continuum with a discrete structure? That's a great philosophical question.

What continuum? All the evidence points to us living in a quantised world. And in any case I see no problem with a discrete structure modelling analogue data. For example a music CD does an excellent job.


But I would like to better understand where you are coming from in terms of your specific objections.
It's a bit of a long story... but here goes....

It starts with Hume's division of knowledge into the real and the abstract. Mathematics is on the abstract side.

The model for mathematics is then a collection of axioms and methods of inference. Then from these axioms an abstract machine is constructed which embodies the axioms and which can then generate 'theorems'. The axioms chosen for the system can be somewhat arbitrary. The main requirement of the axioms is that they are consistent with each other to the degree that a machine can be constructed which embodies all the axioms and only those axioms. Also the machine must be capable of generating theorems (otherwise it is effectively a null system). The third requirement, albeit not a necessity, is that the theorems are 'interesting' to some degree. And certainly for the system of mathematics, the theorems are highly interesting.

It should be emphasised here that the theorems generated are nothing more than strings of abstract symbols that are without 'meaning'. (E.g. the theorem "2+2=4" is nothing more than an arrangement of the symbols "2", "+", "=" and "4".)

The machine can be set to run on its own and generate all the possible theorems of mathematics, albeit not in a finite time. What happens in practice is that mathematicians and scientists operate the machine to focus on particular aspects of the system that interest them.

The operators of the machine can then take the theorems and 'map' them onto their concepts of the real world. So for example the symbol '1' would get mapped onto concepts associated with 'one' or 'oneness'; "=" would get mapped onto concepts of "equality" or "equivalence".

It is this useful mapping between the abstract and the real that makes the theorems of mathematics so interesting.

So in this model there is no need for any foundations of mathematics that go any deeper than its axioms.
wtf
Posts: 1178
Joined: Tue Sep 08, 2015 11:36 pm

Re: Is 0.9999... really the same as 1?

Post by wtf »

A_Seagull wrote:
What I meant was that I don't believe that you can take the general principles of mathematics and through logical analysis arrive at a foundation of sets, because I cannot see how the concept of 'sets' is contained within the general principles of mathematics. (Whereas I do believe that you can take the general principles of atoms and arrive at a theory of protons and electrons.)
It's a little confusing for you to reply to my words by putting your reply inside my quoted text in a different color. Extra work for me when I reply. What's wrong with old-fashioned quoting?

Regarding your point in blue, of course I am in full agreement. I have never in my life heard anyone argue that set theory "is contained within the general principles of mathematics." We have strong evidence to the contrary:

* People have done math for thousands of years. Archimedes, Eudoxus, Gauss, Euler, and Newton were mathematical geniuses who never heard of set theory. Set theory is only a century old; and

* Alternative foundations, several of them, are already beginning their ascendency. In large parts of higher math, specifically algebra and geometry, Category theory has pretty much replaced set theory (although foundationally-minded category theorists do pay attention to which categories are sets and which aren't, and they have invented some clever patches to make the connection between category and set theory legitimate).

It seems clear to me that you are making a strawman argument, claiming that someone (who?) thinks that set theory is logically entailed by the rest of math, when in fact it is manifestly not.
A_Seagull wrote: What continuum? All the evidence points to us living in a quantised world. And in any case I see no problem with a discrete structure modelling analogue data. For example a music cd does an excellent job.
First, there may be evidence but it's in no way conclusive. All we know is that we can't measure below the Planck length. We don't know if that's a problem with measurement or a characteristic of the universe.

But whether the world is discrete or not, humans have an intuition of continuity and a thing called the continuum. A philosopher might well argue that the mathematical continuum is not necessarily the right model of the intuition of the continuum. An intuitionist would certainly make that argument and many of them have.

But I'm puzzled by something here. I tossed out that example (of the continuum) to GIVE YOU AMMUNITION for your own criticism of set theory! And here you are disagreeing with the very evidence of YOUR OWN THESIS that I'm trying to hand to you.

It makes me wonder why you did that.


A_Seagull wrote: it starts with Hume's division of knowledge into the real and the abstract. Mathematics is on the abstract side.
I'm not familiar with Hume, but the rest of what you wrote is quite mysterious. It is the direct opposite of how math works. I can't tell if you are saying this is Hume's account of math, or your account of how you think math IS done or SHOULD BE done. Regardless, what you wrote is so wrong I have to object in detail.
A_Seagull wrote: The model for mathematics is then a collection of axioms and methods of inferences.
But Hume lived (according to Wikipedia) from 1711-1776. He knew nothing of machines, set theory, or axiomatic systems beyond Euclid. I'm a little confused as to whose views you are expressing here.
A_Seagull wrote: Then from these axioms an abstract machine is constructed which embodies the axioms and which can then generate 'theorems'.
I don't understand what it means for a machine that "embodies the axioms." Surely I could program the axioms of set theory into a computer, is that what you mean?
A_Seagull wrote: The axioms chosen for the system can be somewhat arbitrary.
The axioms of set theory are anything BUT arbitrary. The modern axioms are the result of a painful and difficult process that took place from say 1874, the publication of Cantor's first paper on set theory, to the 1920's or so as Zermelo's axioms came to be widely accepted. Each and every axiom was argued over and dissected by mathematicians and mathematical philosophers. Axioms need to be natural and embody some intuitive common sense notions such as unions and intersections.

It's true that we could make up any old axioms and play games with them. Chess is one such example. If the knight moved differently or the board was 9x9 instead of 8x8 we'd have a different game, and nobody would care.

But the axioms of math are not arbitrary, they are intended to encapsulate things that we think are true about the world of collections that we call sets.

You used the word "arbitrary" in your previous post and I strongly objected, and now you have repeated that claim without acknowledging that I say you are WRONG. The axioms of contemporary set theory are not arbitrary. They are the result of 40 years of intellectual struggle to pick the right axioms, and in fact set theorists are still looking for the right set of axioms. This fact alone contradicts your claim (repeated twice but never justified) that the axioms are arbitrary.

I apologize if I sound ranty but the axioms are simply not arbitrary. We can talk about this more if you like.

A_Seagull wrote: The main requirement of the axioms is that they are consistent
Since it's unknown whether the current axioms of set theory are consistent, I wonder where you got this idea. Gödel showed that the axioms of set theory can not prove their own consistency. The only way to show that they are consistent is to ASSUME stronger axioms, whose consistency is then a matter of question.
A_Seagull wrote: with each other to the degree that a machine can be constructed which embodies all the axioms and only those axioms.
Gödel demolished that hope. Although I don't know what you mean by "embodies all the axioms and only those axioms." That phrase doesn't make a lot of sense to me. But if you are saying that an axiom system should be "complete," which means that every properly formed statement can either be proved or refuted from the axioms, then Gödel showed that the ONLY complete axiomatic systems (of sufficient power to do number theory) are the inconsistent ones!

A_Seagull wrote: Also the machine must be capable of generating theorems. (otherwise it is effectively a null-system).

The third requirement, albeit not a necessity, is that the theorems are 'interesting' to some degree. And certainly for the system of mathematics, the theorems are highly interesting.
Any interesting axiomatic system for math is either incomplete or inconsistent. You are aware of that, right? That's why machines don't do math. We do have early computer theorem proving systems and this is an interesting area of contemporary research. But the human mathematicians are still employed.
A_Seagull wrote: It should be emphasised here that the theorems generated are nothing more than strings of abstract symbols that are without 'meaning'. (eg the theorem "2+2=4" is nothing more than an arrangement of the symbols "2", "+" and "4".)
This is the formalist position. But then you haven't explained why the theorems should be "interesting." Why are the rules of math considered ontologically different than the rules of chess? If math were purely a formal game, people still might enjoy playing, but nobody would be arguing that math is central to the nature of the universe.

I do find it awfully strange that you, arguing for intuition, are actually arguing an extreme formalist position that even I don't agree with. Math is NOT just a formal game, that's why its philosophy is so interesting. Nobody claims the universe is made of chess, but Tegmark claims the universe is made of math. Why is that?
A_Seagull wrote: The machine can be set to run on its own and generate all the possible theorems of mathematics
Gödel thoroughly destroyed that claim in 1931. Nobody has found a flaw in his argument. On the contrary, we have found NEW proofs of the same fact via computer science (the halting problem) and algorithmic information theory (Chaitin's proof of Gödel's incompleteness theorem).
A_Seagull wrote: , albeit not in a finite time.
Now that is really and truly wrong. I hope by the way that you know I'm enjoying our conversation and not piling on in a personal manner. But what you wrote in this post is so contrary to the actual facts about math that I feel compelled to call out each and every misunderstanding, point by point.

It is part of the definition of a formal proof that a proof is a FINITE sequence of deductions. A formal proof could in theory take the age of the universe to write down, but that would still be a FINITE amount of time. I can not imagine what you think a proof is that would require infinitely much time to write down.

A_Seagull wrote: What happens in practice is that mathematicians and scientists operate the machine to focus on particular aspects of the system that interest them.
Absolutely and truly false. Math works in the opposite way. Mathematicians are trying to understand the nature of groups, or topological spaces, or differentiable manifolds, and other abstract mathematical objects (which always turn out to be useful to the physicists, contrary to the formalist position).

The pattern is that mathematicians often know what they want to prove, and THEN they write down the axioms that will get them there.

As a striking example, Wiles's proof of Fermat's last theorem is written in the language of modern algebraic geometry, whose axioms go BEYOND standard set theory. It is generally believed that "in principle" Wiles's proof does not actually require the extra axiom (known as the existence of a "Grothendieck universe"). However nobody has bothered to actually write down such a universe-free proof of FLT. Nobody cares about axioms when they're actually doing math. This is a point lost on many philosophers of math who are stuck in 1900 and who have not yet internalized the post-1950 revolution in the way algebra and geometry are done. Set theory's already dead as a practical matter. I can say a lot more about this if you like.

A simpler example is that Galois never heard of set theory, but he invented group theory in order to show why a 5th degree or higher polynomial doesn't have a formula like the quadratic formula. Mathematicians are not interested in axioms, they're interested in theorems.

In other words: The axioms follow from the theorems and not the other way 'round like they tell you in naive accounts of the philosophy of mathematics.

A_Seagull wrote: The operators of the machine can then take the theorems and 'map' them onto their concepts of the real world.
Really? Why do they do that if the theorems are meaningless? I've never seen anyone try to map the rules of chess to the real world, or the rules of Parcheesi, or the rules of Whist. But we apply math to the real world every day; and most math comes right out of physics at some level. Even set theory, as the example of Fourier and Cantor shows.

Your naive account of how math works is a strawman. Math doesn't actually work that way.
A_Seagull wrote: So for example the symbol '1' would get mapped onto concepts associated with 'one' or 'oneness'; "=" would get mapped onto concepts of "equality" or "equivalence".
How do we map the rules of chess? You are making a formalist argument and then completely contradicting your own position. I hope you don't regard it as unfair of me to point that out. If the theorems are meaningless, why do we think they map onto the real world? If they're meaningful, what makes them so? Perhaps it's the NATURALNESS of the axioms and not their supposed arbitrariness.
A_Seagull wrote: It is this useful mapping between the abstract and the real that makes the theorems of mathematics so interesting.
Like I say, you are making both a formalist and a Platonist argument at the same time. I apologize if my post sounds like I'm beating up on your words but you are not stating any kind of coherent position here.
A_Seagull wrote: So in this model there is no need for any foundations of mathematics that go any deeper than its axioms.
So what's your objection to the axioms, then? You just argued yourself out of your own position! You started out by saying you object to basing math on the axioms of set theory; and you ended up by saying that there's no need for any foundation besides the axioms!

tl;dr: Your entire post is a strawman argument describing a theory of math that has absolutely nothing to do with math. And in the end, you argued yourself out of your stated position.
wtf
Posts: 1178
Joined: Tue Sep 08, 2015 11:36 pm

Re: Is 0.9999... really the same as 1?

Post by wtf »

ps -- The preceding sounded excessively ranty to me. I wanted to explain where I'm coming from, separately from what I wrote above. Here is my thesis.

When philosophers, amateur or professional, learn about the formal theory of mathematical proof as it was developed from the 1870's through the 1930's, and let's say culminating with Gödel, they come away with a particular impression.

They seem to believe that Math has turned out to be formalizable with some axioms; and the work of the mathematician is to turn the crank on the proof machine and shape the ground beef into patties.

As it happens this is entirely wrong, totally wrong. And I know this view is held by many esteemed and brilliant philosophers. They're just wrong.

In short, philosophers are "stuck in 1900."

So A_Seagull, I just want to say that you did an admirable job of expressing a certain point of view; that happens to be widely believed; but that is wrong. I'm pushing back on this philosophical error, not on you personally.

Math has changed. There has been a modern revolution in math that started around 1950, give or take. What modern mathematicians do is unrecognizable as any form or extension or exercise in set theory. It's all category theory. Diagram chasing. It's not like any math anyone has ever seen.

Moreover it's based on an entirely different set of philosophical principles than set theory. There has been a structuralist revolution in math and the philosophers haven't heard about it.

We no longer care about sets as collections of elements. We have in fact done away entirely with the concept of elements. Modern math is expressed purely in terms of the relationships among objects and does not care about their inner nature. Thus we do away entirely with the idea that the number 3 is a set containing 0, 1, and 2 -- which it manifestly isn't!
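For concreteness, here is a toy sketch of the encoding being set aside: the von Neumann convention in which each natural number is the set of all smaller ones, built up from the empty set (Python frozensets stand in for sets, and the function name is mine).

def von_neumann(n):
    # 0 is the empty set; n+1 is n together with {n}.
    current = frozenset()
    for _ in range(n):
        current = current | frozenset({current})
    return current

three = von_neumann(3)
print(len(three))               # 3 -- its elements are the encodings of 0, 1, 2
print(von_neumann(2) in three)  # True -- "2 is an element of 3" under this encoding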

In other words A_Seagull you are entirely correct. We don't really think numbers are sets anymore. We no longer think objects have little elements in them at all. Rather, objects have relationships with other objects; and their internal structure is opaque to us. A function is an arrow between two objects. It's no longer a machine that maps one element to another. That entire point of view is gone from modern math.

I am overstating the case a little. This particular revolution is mostly in algebra and geometry. Not so much analysis. It's also greatly impacted logic. Look up toposes. I don't know much about them myself but evidently the category theorists have revolutionized logic and we can express intuitionist logic and classical logic as two special cases of something more general.

So the bottom line is that I did push back hard on this misconception philosophers have about math, which you expressed very well. But I wasn't really yelling at everything you wrote one sentence at a time. I was yelling at the philosophers who taught you that stuff.
Last edited by wtf on Wed Mar 08, 2017 3:39 am, edited 4 times in total.
Philosophy Explorer
Posts: 5621
Joined: Sun Aug 31, 2014 7:39 am

Re: Is 0.9999... really the same as 1?

Post by Philosophy Explorer »

wtf wrote:ps -- The preceding sounded excessively ranty to me. I wanted to explain where I'm coming from, separately from what I wrote above. Here is my thesis.

When philosophers, amateur or professional, learn about the formal theory of mathematical proof as it was developed from the 1870's through the 1930's, and let's say culminating with Gödel.

Now -- and this is only an impression or opinion of mine, I have no scholarly research to back up what I'm about to say -- a lot of philosophers, amateur and professional, have come away with the impression; or been taught; or have read online somewhere; the opinion that Math has turned out to be formalizable with some axioms; and the work of the mathematician is to turn the crank on the proof machine.

As it happens this is entirely wrong, totally wrong. And I know this view is held by many esteemed and brilliant philosophers. They're just wrong.

In short, philosophers are "stuck in 1900."

However, math has changed. In fact there has been a modern revolution in math that started around 1950, give or take. What modern mathematicians do is unrecognizable as any form or extension or exercise in set theory. It's all category theory. Diagram chasing. It's not like any math anyone has ever seen.

Moreover it's based on an entirely different set of philosophical principles than set theory. There has been a structuralist revolution in math and the philosophers haven't heard about it.

We no longer care about sets as collections of elements. We have in fact done away entirely with the concept of elements. Modern math is expressed purely in terms of the relationships among objects and does not care about their inner nature. Thus we do away entirely with the idea that the number 3 is a set ... which it manifestly isn't!

In other words ... A_Seagull I think the mathematicians agree with you. Set theory's not really the foundation of modern math as practiced by professionals. Or to be more clear: Set theory has tremendous mindshare, traction, and inertia. So it's still the language of math at some basic level. But the philosophy that goes along with set theory -- that objects are composed of elements -- that's gone away. Now we just draw objects and the arrows between them. We have the functions but they no longer "act on elements." Honestly, there has been a humongous revolution in math that the public doesn't know about yet.

So A_Seagull in your post, you did an excellent job of representing a view of math that has been taught to you as being correct and wise. As it happens, it's wrong. When philosophers learn what's been going on in math since 1950, they are really going to be surprised.

That's where I am coming from, for what it's worth. I definitely was not attacking the things you wrote. I was attacking the things you heard from someone else, believing them to be true. Your ideas are wrong but you came by them honestly. Much of the philosophical profession is wrong about the nature of math.
Here's a good question to ask. There are different foundations proposed for mathematics. Which one would you support (if any)?

PhilX