Speakpigeon wrote: ↑Sat Mar 02, 2019 9:56 pm

Another way to understand what the programme proposed initially by our little cheat here really does is to express it as a logical argument. Something he refused to do himself even though I asked him to do it so we could understand.

So here it is:

Premise 1 - A is identical to A;

Premise 2 - A is not identical to A;

Therefore, A is identical to A and A is not identical to A.

This argument is trivially valid in mathematical logic just because the two premises are contradictory to begin with. Garbage in, garbage out.

Such arguments, although valid, cannot possibly be sound precisely because the premises are contradictory, which means that one of the two premises has to be false and the argument therefore unsound.

It should be noted that this aspect of mathematical logic has been well known since around 1900 and Bertrand Russell, and even before that it was proposed by William of Soissons in the 12th century. So, no big news...

It's been dubbed "the principle of explosion":

The principle of explosion (Latin: ex falso (sequitur) quodlibet (EFQ), "from falsehood, anything (follows)", or ex contradictione (sequitur) quodlibet (ECQ), "from contradiction, anything (follows)"), or the principle of Pseudo-Scotus, is the law of classical logic, intuitionistic logic and similar logical systems, according to which any statement can be proven from a contradiction. That is, once a contradiction has been asserted, any proposition (including their negations) can be inferred from it. This is known as deductive explosion. The proof of this principle was first given by 12th century French philosopher William of Soissons.

https://en.wikipedia.org/wiki/Principle_of_explosion

The idea that it would falsify the Law of Identity is simply idiotic.

Also, this shows the guy is an ignoramus and doesn't understand at all what the Law of Identity means.

EB

The Mathematical/Aristotelian fraternity currently draws a distinction between the notions of provability and decidability.

Proof is strictly about deduction. Decidability, a.k.a. induction, is ignored as a concern of Mathematics.

All 'proofs' of **x = x** are axiomatic from the classical law of identity (assumed).

Even the theorem prover Coq behaves this way.

By the Curry-Howard isomorphism, proofs are isomorphic to computer programs:

https://en.wikipedia.org/wiki/Curry%E2% ... espondence
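As a loose illustration (my own sketch, not from the post): under Curry-Howard, a proof of the implication A → A corresponds to the identity function as a program.

```python
# A loose Curry-Howard illustration: a proof of the implication
# A -> A corresponds to a program that maps any value of type A
# to a value of type A -- the identity function.

def identity(a):
    """The program corresponding to the trivial proof of A -> A."""
    return a

# The same "proof term" works uniformly for every type A.
print(identity(42))       # 42
print(identity("hello"))  # hello
```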
In order to get to where I am, I have junked Set Theory and started with Type Theory as fundamental to all Mathematics.

https://en.wikipedia.org/wiki/Type_theory
In the universe of Type Theory (the human universe), 1 is not anything in particular. 1 is just an abstract symbol. Symbols are just language. They can mean anything we want them to mean. If you subscribe to the decimal system, 1 + 1 = 2; if you subscribe to the binary system, 1 + 1 = 10.
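That symbols-are-just-language point can be checked mechanically; here is a minimal Python sketch (standard library only):

```python
# The quantity 1 + 1 is what it is; only its *notation* varies
# with the base we choose to write it in.
two = 1 + 1

# Rendered in decimal notation, the sum reads "2"...
print(format(two, 'd'))  # 2

# ...while in binary notation the same sum reads "10".
print(format(two, 'b'))  # 10

# Conversely, what the symbol "10" denotes depends on the declared base.
print(int("10", 10))  # 10
print(int("10", 2))   # 2
```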

And so it is a fundamentally important question to ask: "We understand what 1's value is - it's 1! But what is its identity?"

In pursuit of the answer, my foundational alphabet is the characters (NOT integers) 1, 2, 3, 4, 5, 6, 7, 8, 9, 0.

And the goal is to DERIVE the digits and integers from them.

In order to derive the digits we need to PROVE (computationally) that string(1) = digit(1), string(2) = digit(2), etc.
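A minimal Python sketch of that derivation, treating `string(...)`/`digit(...)` as a character-to-digit mapping (the helper name `digit` is my own illustration, not from the post):

```python
# Treat '0'..'9' as bare alphabet characters (NOT integers) and
# derive the digit each character denotes from its position in
# the alphabet.
ALPHABET = "0123456789"

def digit(ch):
    """Derive the digit denoted by a character of the alphabet."""
    return ALPHABET.index(ch)

# Computationally check string(c) = digit(c) across the whole alphabet.
for ch in ALPHABET:
    assert digit(ch) == int(ch)

print(digit('7'))  # 7
```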

Since our alphabet is finite, in the abstract, and given an infinite amount of time and memory, a Turing machine will indeed determine that for all x: x = x => True.

Even for infinite values of x, BECAUSE abstract Turing machines are infinite concepts. But we humans don't live in the abstract, infinite universe of Mathematics.

We live in a physical reality where computation requires a non-zero amount of energy and a non-zero number of operations in order to decide the truth-value of a proposition. Even a proposition as simple as 'x = x' needs to be computed/decided by SOMETHING. Usually a human mind.

To conclude it axiomatically from the classical law of identity is to presuppose its truth!

There are Truths which cannot be deduced directly from the theorems.

What does 1 mean?!? To a Mathematician/Aristotelian - nothing. To a scientist - everything!

It's information. Information is how we make decisions. Information is how we test hypotheses. Information is how we falsify hypotheses.

Information is experience. Information is fundamental to human thought!

Classical logic conflates the notions of **identity** and **value**.

On a Turing machine, **identity** means a unique memory address. Let's call it M.

**Value** means the contents of memory at location M. Let's call it VALUE(M).
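Python makes this distinction directly observable: `id(x)` plays the role of the address M (in CPython it is in fact derived from the object's memory address), while `==` compares VALUE(M). A small sketch:

```python
# Two distinct objects holding the same value.
a = [1, 2, 3]
b = [1, 2, 3]

# VALUE comparison: inspects the contents at both locations.
print(a == b)  # True

# IDENTITY comparison: compares the "addresses" themselves.
print(a is b)          # False
print(id(a) == id(b))  # False

# A second name bound to the *same* object shares its identity.
c = a
print(c is a)  # True
```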

### Drawing a distinction between identity and value

Identity := id(x)

Value := value(x)

P1. for all x: id(x) == value(x) => False

The identity of an object is not the same as its value. We will use Big-O notation to quantify 'value'.

https://en.wikipedia.org/wiki/Big_O_notation
For those unfamiliar with algorithmic analysis and Big-O notation, suffice it to say that it is synonymous with the English notion of complexity.

A simple thing is O(1). An infinitely complex thing is O(∞). Complexity is a function of value.

The more information an object contains - the more complex it is.

### The law of identity in a Digital Physics universe.

P2.A for all x: id(x) == id(x) => True

Each discernible object in physical reality has a universally unique identity.

The computational cost of the id(x) function is O(1).

Telling that two things are not the same is trivial! That is literally why we say 'two things'.

### The law of value in the Digital Physics universe (Classical law of identity)

The more information an object contains - the harder it is to decide its value. The harder it becomes to prove "x == x".

The cost of the decision is O(1) for small values of x (e.g. really simple objects) and O(∞) for infinite values of x (e.g. really complex objects).

P3. for all x: x == x => UNDECIDABLE. Best case: O(1), Worst case: O(∞)
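One way to see the asymmetry between P2.A and P3 concretely: in Python, an identity check is a single pointer comparison regardless of the object's size, while a value check must in the worst case walk the entire object. A rough timing sketch (standard library only; exact numbers will vary by machine):

```python
import timeit

# A large value, and an equal-valued but distinct object.
big = list(range(1_000_000))
copy = list(big)

# Identity check: one pointer comparison -- O(1) whatever the size.
t_id = timeit.timeit(lambda: big is copy, number=1000)

# Value check: element-by-element comparison -- O(n) in the worst case.
t_eq = timeit.timeit(lambda: big == copy, number=1000)

print(big == copy)  # True: same value
print(big is copy)  # False: different identity
print(t_id < t_eq)  # the identity check is far cheaper
```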

Aristotelian Identity is the principle of explosion - camouflaged!

Aristotelians don't know what identity is and mistake identity for value.

The Classical law of identity is an illusion. It causes us to mistake the complex for the simple. Behind a simple statement like "x = x" hides an infinite amount of complexity. This very fact prevents us from saying anything useful about the integers beyond phenomenology. If I can't recognize an integer for what it is, then I am stuck in a dream.

https://repl.it/@LogikLogicus/INTEGERS
Next step: proving that the integers are finite.