Scott Mayers wrote: ↑Tue Feb 26, 2019 1:21 am
....you are asking about the syntactical symbols literally and to how we read them.

No. I am most definitely not doing that. I am asking you to DO WORK in the physics sense of the word.
I am asking you to compare two things and determine IF they are "the same".
I am asking you to do what every philosopher takes for granted (that it is trivial to understand the meaning of "=").
I am asking you to do what any programmer recognizes as the mechanical process of sorting, which can be trivial OR infinitely complex.
And what any statistician recognizes as binary classification: https://en.wikipedia.org/wiki/Binary_classification
Given an urn full of black and white balls, I am asking you to sort the balls into two groups: one set of white balls and one set of black balls.
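Mechanically, the urn task is a one-line classification rule. A minimal sketch (the urn contents are made-up example data):

```python
# Sort an urn of balls into two piles using a trivial classification rule.
# The urn contents below are hypothetical example data.
urn = ["black", "white", "white", "black", "white"]

white_pile = [ball for ball in urn if ball == "white"]
black_pile = [ball for ball in urn if ball == "black"]

print(white_pile)  # ['white', 'white', 'white']
print(black_pile)  # ['black', 'black']
```

The classification rule here is simply `ball == "white"`; the whole argument below is about what happens when that comparison stops being trivial.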
This is a trivial task for us when A and B are well contrasted and clearly juxtaposed.
It becomes an exponentially harder task as the contrast between A and B falls outside the sensitivity range of your measurement apparatus (in this case - your eyeballs), and there comes a point where your equipment is not precise enough and A = B becomes A = A.
Now OBVIOUSLY you can tell that two balls are TWO different objects (they have unique spacetime coordinates), but you can't tell if one red ball is 630 nanometers red and another ball is 631 nanometers red.
I am merely using symbols because it's easy to make the point using this particular medium of communication. If we were talking in person - I would probably use a different example.
What I am trying to demonstrate to you is how fundamental information, decidability and hypothesis testing are to human reasoning.
IF you can detect a difference between A and A THEN A = A is false.
IF you can't detect a difference between A and A THEN A = A is true.
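The rule above can be made concrete with the red-ball example. A minimal sketch, assuming a hypothetical apparatus whose wavelength resolution is 5 nm (the figure is made up for illustration):

```python
RESOLUTION_NM = 5.0  # hypothetical sensitivity of the measurement apparatus

def indistinguishable(a_nm: float, b_nm: float) -> bool:
    """A = A is 'true' exactly when no difference can be detected."""
    return abs(a_nm - b_nm) < RESOLUTION_NM

# 630 nm vs 631 nm red: the difference is below the apparatus's resolution.
print(indistinguishable(630.0, 631.0))  # True  -> A = A
# 630 nm vs 650 nm: the difference is detectable.
print(indistinguishable(630.0, 650.0))  # False -> A = B
```

With a finer instrument (smaller `RESOLUTION_NM`), the same two balls flip from "identical" to "different" - which is exactly the point about A = B collapsing into A = A.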
And this will short-circuit your brain because while you are SEEING two symbols which LOOK identical, they are NOT identical.
This is the process in your brain which sorts things into categories. Cats vs Dogs. Black vs White. Red vs Green.
Good vs Evil. Better vs worse. True vs False. 1 vs 0.
It's pattern recognition - something our brains are amazing at and computers, not so much (yet).
In order for you to be able to classify ANYTHING into categories you need what is called a "Classification rule": https://en.wikipedia.org/wiki/Classification_rule
Observe that even though we are trying to SORT things into two categories (say true and false) there are actually 4 possible outcomes here.
In addition to the two correct outcomes (true positives and true negatives), there are two kinds of error: false positives and false negatives. In English: you have put a black ball in the white pile, or a white ball in the black pile, i.e. an error.
This becomes harder still when you have to sort things into 200 bins, and the number of possible misclassifications grows rapidly too!
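The four outcomes can be tallied as a confusion matrix. A sketch with made-up ground truth and predictions, treating "white" as the positive class:

```python
# Tally the four outcomes of a binary classifier.
# The ground-truth and predicted labels below are hypothetical example data.
actual    = ["white", "white", "black", "black", "white"]
predicted = ["white", "black", "black", "white", "white"]

tp = sum(a == "white" and p == "white" for a, p in zip(actual, predicted))
tn = sum(a == "black" and p == "black" for a, p in zip(actual, predicted))
fp = sum(a == "black" and p == "white" for a, p in zip(actual, predicted))
fn = sum(a == "white" and p == "black" for a, p in zip(actual, predicted))

print(tp, tn, fp, fn)  # 2 1 1 1
```

With 200 bins the same tally becomes a 200x200 matrix, with 200 correct cells on the diagonal and 39,800 distinct ways to be wrong.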
Scott Mayers wrote: ↑Tue Feb 26, 2019 1:21 am
This requires separating the language we use to discuss some logic from the actual logic itself. What you are thinking is that the fact that we read "A = A" linearly, that what we understand them to mean within the system is unable to be true out of some practical limitations of communicating it, but implying that our lack of this ability imposes something intrinsically true about the meaning of the logical postulates.

No, what I mean is that "A" is a different pattern of information to "A". They only LOOK the same but they are not the same.
What I want you to pay attention to is the fact that while you are DOING WORK (e.g SORTING) you are not SPEAKING.
You are THINKING. The process of sorting things is ENTIRELY mechanical.
And if nobody ever asked you to explain the classification rule, you would NEVER have to narrate those thoughts.
So how can you possibly conceptualise ANY undecidable "logic" as the "laws of" reason when undecidable logics most definitely cannot DO sorting?!?
Scott Mayers wrote: ↑Tue Feb 26, 2019 1:21 am
Turing only used an idealized computer system in thought to convey the limitations about logic itself, not about physical computers.

Yeah, but Shannon turned everything on its head with Information theory. And you arrive squarely in the realm of Science and statistical hypothesis testing.
Hypothesis 1: A = A is true.
Hypothesis 2: A = A is false.
What experiment would you perform to determine which hypothesis is the valid one?
Or as a scientist would say "Can you give me a procedure by which you would distinguish the two cases?"
Here is my procedure: https://repl.it/repls/ShortLightgrayPiracy
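I can't reproduce the linked repl here, but a minimal sketch of one such distinguishing procedure: compare the two tokens' stored representations byte by byte, and report "same" only when no difference is detected. The helper name `distinguish` is mine, not from the repl:

```python
def distinguish(a: str, b: str) -> str:
    """Report whether any difference between two tokens is detectable
    at the level of their stored (byte) representation."""
    if a.encode("utf-8") != b.encode("utf-8"):
        return "false: a difference was detected"
    return "true: no difference was detected"

print(distinguish("A", "A"))   # no detectable difference at this resolution
print(distinguish("A", "\u0391"))  # Latin 'A' vs Greek capital alpha:
                                   # they LOOK identical but the bytes differ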
If you can't conceive of an actual procedure, and YET you somehow conclude that A = A is true (because your brain is magical and gives you the right answers), you have to allow for the possibility that this is a false positive!
You have to allow for the possibility of error.
Scott Mayers wrote: ↑Tue Feb 26, 2019 1:21 am
The utility of his analogy using an imaginary constructs and architecture that we today call a 'computer' is only coincidentally significant as to how it applies to actual physical computers. We call them "Turing" machines because of his unique 'design' of a simplified computer rather than to other optional designs that actual computers ended up using in reality.

Hah! No. You have it backwards. CONCEPTUAL Turing Machines are Universal - given infinite time, infinite memory and infinite energy they can solve ANY problem via mere brute-forcing.
It is only when you bring limits (physics!) into the mix that we begin to discuss the limits of computation.
Scott Mayers wrote: ↑Tue Feb 26, 2019 1:21 am
For instance, he treated the programs initially as lists of separate instructions that a person has to take data from a single tape, read, interpret, and alter if instructed, and then to print the outcome back onto the tape. Those are 'non-universal'. His 'universal' machine was one that has a fixed kind of BIOS hardware design that takes the first data off the tape to DEFINE a the program virtually without a need to reconfigure the hardware.

Moot. We have generalized computation beyond Turing's initial work.
We have Universalized computation.
Here's a much better definition of a computer: a machine which manipulates information.
What is information? The thing which allows you to decide between these two hypotheses:
Hypothesis 1: A = A is true.
Hypothesis 2: A = A is false.
Bayesian inference.
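A minimal Bayesian update over those two hypotheses, with made-up numbers for illustration: start from a 50/50 prior, observe "no difference detected", and update on how likely that observation is under each hypothesis given an imperfect apparatus:

```python
# Bayesian update over the two hypotheses. All figures are hypothetical.
prior_same = 0.5   # P(H1: A = A is true)
prior_diff = 0.5   # P(H2: A = A is false)

# Likelihood of observing "no difference detected" under each hypothesis.
p_obs_given_same = 0.99  # apparatus rarely reports a spurious difference
p_obs_given_diff = 0.10  # a real difference is occasionally missed

# Bayes' rule: posterior = prior * likelihood / evidence
evidence = prior_same * p_obs_given_same + prior_diff * p_obs_given_diff
posterior_same = prior_same * p_obs_given_same / evidence

print(round(posterior_same, 3))  # 0.908
```

Even after observing "no difference", the posterior is not 1.0: you have to allow for the possibility that the observation is a false positive, which is exactly the point about A = A above.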