Is there a sentence that proves itself is not provable?

What is the basis for reason? And mathematics?

Moderators: AMod, iMod

Scott Mayers
Posts: 2446
Joined: Wed Jul 08, 2015 1:53 am

Re: Is there a sentence that proves itself is not provable?

Post by Scott Mayers »

PeteOlcott wrote: Thu Apr 04, 2019 6:26 am
Scott Mayers wrote: Thu Apr 04, 2019 6:21 am
I appreciate you updating your approach in order to communicate this better. So I'll be patient, but I still believe that you have likely mistaken the interpretations of the theorems. Note that I opened a distinct thread to discuss historical aspects of the topic. It may help you there to experiment with communicating some of the background from your understanding, so that 'we' can hopefully come to common ground on this particular argument you have. I initially proposed an earlier related problem about the limitations of the rational numbers relative to the real numbers, which may help to express the significance of these theorems. While I may disagree with your position up front, maybe I might at least understand better what you mean in time. Good luck and keep up your effort to express it better.
I haven't spent 10,000 hours over 22 years on a mistake. You have to actually read my two-page paper and see what I said.
This only tells me that you're overly invested. There is a psychological concept called PROJECTION, which means that the more you invest in something, the more valuable it is to you REGARDLESS of what evidence is presented that might counter it. When we invest in someone we love, we become less willing to give them up should they turn us down, because we interpret the loss as meaning we've wasted all our investment in the object of our desire without satisfaction.

You need to bear this in mind as it is also what drove many prior theorists on this subject to suicide.

May I ask you if there is any possible counterproof to your theory?
PeteOlcott
Posts: 1514
Joined: Mon Jul 25, 2016 6:55 pm

Re: Is there a sentence that proves itself is not provable?

Post by PeteOlcott »

Scott Mayers wrote: Thu Apr 04, 2019 6:34 am
PeteOlcott wrote:We derive these three universal Truth predicate axioms:
(1) ∀F ∈ Formal_Systems ∀x ∈ WFF(F) (True(F, x) ↔ (F ⊢ x))
(2) ∀F ∈ Formal_Systems ∀x ∈ WFF(F) (False(F, x) ↔ (F ⊢ ~x))
(3) ∀F ∈ Formal_Systems ∀x ∈ WFF(F) (~True(F, x) ↔ ~(F ⊢ x))
A "well-formed-formula" is just a syntactical convention for a system. For the non-initiated, this means nothing but looks confusing without an explanation.

The concept of a 'sequent' uses the symbol "⊢": whatever stands to the left of it is 'given' as the list of inputs, and what follows the "⊢" is the conclusion. For a 'theorem', this requires presenting ONLY the conclusion, to demonstrate its tautological nature. Thus, for example,

⊢ P or not-P


means, "it is a theorem that P or not-P is true in the understood system." As you expressed them, these are not universal truths but conditional truths.
A "well-formed-formula" is just a syntactical convention for a system.
Meaning that the expression is correctly parsed by this syntax:
https://www.researchgate.net/publicatio ... y_YACC_BNF

In the above case (F ⊢ x) means that x is a theorem of F.
When the first two Truth predicate axioms are false, the third one detects and reports Semantically-ill-formed(x).
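The trichotomy described above can be read operationally. Here is a minimal sketch in Ruby (the model, and the names THEOREMS, provable?, negation and truth_value, are my own illustration, not from Pete's paper), treating a formal system F as a finite set of its theorems:

```ruby
# Toy model: a formal system F represented as a finite set of its
# theorems (strings). All names here are illustrative.
THEOREMS = ["p", "~q"]

def provable?(x); THEOREMS.include?(x); end
def negation(x); x.start_with?("~") ? x[1..-1] : "~" + x; end

# Axiom (1): True(F,x)  iff F ⊢ x
# Axiom (2): False(F,x) iff F ⊢ ~x
# Otherwise x is neither provable nor refutable, and gets reported
# as semantically ill-formed, as described in the post above.
def truth_value(x)
  return :true  if provable?(x)
  return :false if provable?(negation(x))
  :semantically_ill_formed
end

puts truth_value("p")   # prints "true"
puts truth_value("q")   # prints "false" (because ~q is a theorem)
puts truth_value("r")   # prints "semantically_ill_formed"
```

This is only a sketch under the stated assumption that provability can be modeled by membership in a fixed finite set; real formal systems have infinitely many theorems.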
Scott Mayers
Posts: 2446
Joined: Wed Jul 08, 2015 1:53 am

Re: Is there a sentence that proves itself is not provable?

Post by Scott Mayers »

PeteOlcott wrote: Thu Apr 04, 2019 6:47 am
Scott Mayers wrote: Thu Apr 04, 2019 6:34 am
PeteOlcott wrote:We derive these three universal Truth predicate axioms:
(1) ∀F ∈ Formal_Systems ∀x ∈ WFF(F) (True(F, x) ↔ (F ⊢ x))
(2) ∀F ∈ Formal_Systems ∀x ∈ WFF(F) (False(F, x) ↔ (F ⊢ ~x))
(3) ∀F ∈ Formal_Systems ∀x ∈ WFF(F) (~True(F, x) ↔ ~(F ⊢ x))
A "well-formed-formula" is just a syntactical convention for a system. For the non-initiated, this means nothing but looks confusing without an explanation.

The concept of a 'sequent' uses the symbol "⊢": whatever stands to the left of it is 'given' as the list of inputs, and what follows the "⊢" is the conclusion. For a 'theorem', this requires presenting ONLY the conclusion, to demonstrate its tautological nature. Thus, for example,

⊢ P or not-P


means, "it is a theorem that P or not-P is true in the understood system." As you expressed them, these are not universal truths but conditional truths.
A "well-formed-formula" is just a syntactical convention for a system.
Meaning that the expression is correctly parsed by this syntax:
https://www.researchgate.net/publicatio ... y_YACC_BNF

In the above case (F ⊢ x) means that x is a theorem of F.
When the first two Truth predicate axioms are false, the third one detects and reports Semantically-ill-formed(x).
This is 'conditional' though: IF "F" is the system used, then x is a consequent BY DEFINITION of the system's axioms.

"Truth" is distinct from "validity" though. So what is more accurate to say is that the system defines a "valid truth of the system" as being an axiom of it. It can't speak of 'truth' as defined outside of the system. So you are already confusing the system's fitness with respect to itself with its fitness compared to reality as a whole. The incompleteness theorems are about a system's capacity to accurately represent what exists in reality.

The theorems are still based upon the meaning of 'formal systems', which are those that HAVE to be 'consistent'. What the incompleteness theorems do is to express that reality cannot be completely expressed using some calculator because the tool itself is only a subset of the reality it is measuring. I can have a hand-held calculator as a tool to sum up the prices of products I am interested in potentially purchasing so that I can know ahead of time whether I have the money to pay for them. But I can't use the calculator to DECIDE what I WILL buy with my limited funds.

I CAN add a programmed calculator that helps me to DECIDE, but it still doesn't assure that I would take the calculator's conclusion about what I should purchase. This intuitively means that my own processes in mind are not bound to the conclusions of the calculator's 'opinion'. (its conclusion, that is)

What IS more closed (complete), is a system that incorporates ALL possibilities that I might think of. But then some such 'calculator' has to be inclusive of all possible decisions. Such a calculator could then only say something like, I will buy X or some non-X. This is 'completely' exhaustive of all possibilities but is no longer 'functional', meaning that for every input there is a unique output conclusion.

If reality offers us 'choices' of more than one possible conclusion, then no formal system, based on a FIXED unique result expected, can mean anything. For instance, you might have a computer that concludes, "buy the products you pick or do not buy the products you pick" where it tests whether the money you have can cover all costs. But this kind of conclusion that such a calculator can suggest doesn't tell you that you are 'wrong' for selecting to buy what you have or not because both answers are equally true if the costs of whatever you have 'fit' to the money you have at most in your pocket.

Thus we need a newer 'computer' that can measure more details about what is necessary to help decide which are the better purchase choices. However, if reality contains ALL options somewhere in totality, then no mechanical process will ASSURE you of what decision you WILL accept. If you CAN find such a perfect machine, then it would assure your action by its conclusion without confusion.

Thus the theorems were only about asserting that no FORMAL system is sufficient to determine a unique conclusion in reality because reality is itself more inclusive of the optional outcomes than a calculator can suggest OR that IF the conclusions are completely exhaustive of what you will decide to do, there are conclusions that merely state the optional conclusions but not which one is decisively 'better'.

What you want to do is to try to find some 'determinate' machine that can assure you act upon the conclusions no matter what. While this is true of many possible realities, it is not true of all possible realities. This makes such an 'ideal' machine impossible to solve all problems in a strictly UNIQUE way.
Scott Mayers
Posts: 2446
Joined: Wed Jul 08, 2015 1:53 am

Re: Is there a sentence that proves itself is not provable?

Post by Scott Mayers »

Reality as a whole IS 'determinate' with respect to totality. If totality COVERS all possible options in some world, then while we may not be able to experience all such real options locally, our totality can still be 'complete'. We just can't locally determine our own fate where multiple possible options exist.

Locally we are 'indeterminate', and so things like 'free choice' are UNDECIDABLE in any one particular universe. On the whole, though, reality IS 'determinable', but it is beyond the capacity of any single machine within a specific universe to determine with precision.

Thus, the 'incompleteness' theorems are not true on the scale of the Cosmos of all possible worlds, but true in any given Universe, like ours.
Logik
Posts: 4041
Joined: Tue Dec 04, 2018 12:48 pm

Re: Is there a sentence that proves itself is not provable?

Post by Logik »

Scott Mayers wrote: Thu Apr 04, 2019 4:21 am (A → B) is identical to (not-A or B). In essence, either A is false (and B may be either true or false), or B necessarily follows when A is true.
Thanks. In which case Pete's challenge is trivial to satisfy.
PeteOlcott wrote: Fri Mar 29, 2019 1:51 am Can this: ∃F∃G(G ↔ ~(F ⊢ G)) be shown to be satisfiable without Gödel numbers ?
Yes it can. Like this.

https://repl.it/repls/GigaDarkkhakiDebugging

Code:

def implies(a,b)
  return ((not a) or b)   # material implication: a → b ≡ (¬a ∨ b)
end

def f; return true; end   # f: a fixed truth value

$toggle = true
def g; $toggle = (not $toggle); end   # g: flips its value on every call

g == (not implies(f,g))
=> true
No Gödel numbers. Just a temporal object, a.k.a. an oscillator.
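The "oscillator" point can be seen directly: g is stateful, so successive calls return different values. A small standalone sketch of the same mechanism (mine, not Logik's code):

```ruby
# g flips a global flag on every call, so it has no fixed truth value.
$toggle = true
def g; $toggle = (not $toggle); end

a = g   # first call:  true  -> false
b = g   # second call: false -> true
puts [a, b].inspect   # prints "[false, true]"
```

This is why the equation in the post holds only for a particular evaluation order: each occurrence of g denotes a different state of the toggle, not a single proposition.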
PeteOlcott
Posts: 1514
Joined: Mon Jul 25, 2016 6:55 pm

Re: Is there a sentence that proves itself is not provable?

Post by PeteOlcott »

Scott Mayers wrote: Thu Apr 04, 2019 7:33 am
PeteOlcott wrote: Thu Apr 04, 2019 6:47 am
Scott Mayers wrote: Thu Apr 04, 2019 6:34 am

A "well-formed-formula" is just a syntactical convention for a system. For the non-initiated, this means nothing but looks confusing without an explanation.

The concept of a 'sequent' uses the symbol "⊢": whatever stands to the left of it is 'given' as the list of inputs, and what follows the "⊢" is the conclusion. For a 'theorem', this requires presenting ONLY the conclusion, to demonstrate its tautological nature. Thus, for example,

⊢ P or not-P


means, "it is a theorem that P or not-P is true in the understood system." As you expressed them, these are not universal truths but conditional truths.
A "well-formed-formula" is just a syntactical convention for a system.
Meaning that the expression is correctly parsed by this syntax:
https://www.researchgate.net/publicatio ... y_YACC_BNF

In the above case (F ⊢ x) means that x is a theorem of F.
When the first two Truth predicate axioms are false, the third one detects and reports Semantically-ill-formed(x).
This is 'conditional' though: IF "F" is the system used, then x is a consequent BY DEFINITION of the system's axioms.

"Truth" is distinct from "validity" though. So what is more accurate to say is that the system defines a "valid truth of the system" as being an axiom of it. It can't speak of 'truth' as defined outside of the system. So you are already confusing the system's fitness with respect to itself with its fitness compared to reality as a whole. The incompleteness theorems are about a system's capacity to accurately represent what exists in reality.

The theorems are still based upon the meaning of 'formal systems', which are those that HAVE to be 'consistent'. What the incompleteness theorems do is to express that reality cannot be completely expressed using some calculator because the tool itself is only a subset of the reality it is measuring. I can have a hand-held calculator as a tool to sum up the prices of products I am interested in potentially purchasing so that I can know ahead of time whether I have the money to pay for them. But I can't use the calculator to DECIDE what I WILL buy with my limited funds.

I CAN add a programmed calculator that helps me to DECIDE, but it still doesn't assure that I would take the calculator's conclusion about what I should purchase. This intuitively means that my own processes in mind are not bound to the conclusions of the calculator's 'opinion'. (its conclusion, that is)

What IS more closed (complete), is a system that incorporates ALL possibilities that I might think of. But then some such 'calculator' has to be inclusive of all possible decisions. Such a calculator could then only say something like, I will buy X or some non-X. This is 'completely' exhaustive of all possibilities but is no longer 'functional', meaning that for every input there is a unique output conclusion.

If reality offers us 'choices' of more than one possible conclusion, then no formal system, based on a FIXED unique result expected, can mean anything. For instance, you might have a computer that concludes, "buy the products you pick or do not buy the products you pick" where it tests whether the money you have can cover all costs. But this kind of conclusion that such a calculator can suggest doesn't tell you that you are 'wrong' for selecting to buy what you have or not because both answers are equally true if the costs of whatever you have 'fit' to the money you have at most in your pocket.

Thus we need a newer 'computer' that can measure more details about what is necessary to help decide which are the better purchase choices. However, if reality contains ALL options somewhere in totality, then no mechanical process will ASSURE you of what decision you WILL accept. If you CAN find such a perfect machine, then it would assure your action by its conclusion without confusion.

Thus the theorems were only about asserting that no FORMAL system is sufficient to determine a unique conclusion in reality because reality is itself more inclusive of the optional outcomes than a calculator can suggest OR that IF the conclusions are completely exhaustive of what you will decide to do, there are conclusions that merely state the optional conclusions but not which one is decisively 'better'.

What you want to do is to try to find some 'determinate' machine that can assure you act upon the conclusions no matter what. While this is true of many possible realities, it is not true of all possible realities. This makes such an 'ideal' machine impossible to solve all problems in a strictly UNIQUE way.
You are conflating some things there. I will try to boil it down to something much simpler.
If we define truth as derived on the basis of some set of expressions of language,
in the same sort of way that theorems are derived from axioms using rules-of-inference,
and in the same way that true conclusions are derived by valid deduction from true premises,

then it would not be possible for an expression of language G to assert that
it is not provable in F and yet have this assertion be proven in F, and thus be true in F.
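The claim above can be checked by brute force: once truth is identified with provability, an expression G asserting its own unprovability amounts to G ↔ ¬G, which no two-valued assignment satisfies. A minimal sketch (my own code, not from either poster):

```ruby
# If True(F,G) is identified with (F ⊢ G), then G ↔ ~(F ⊢ G)
# collapses to G ↔ ~G; we check both candidate truth values for G.
satisfiable = [true, false].any? { |g| g == !g }
puts satisfiable   # prints "false"
```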
Scott Mayers
Posts: 2446
Joined: Wed Jul 08, 2015 1:53 am

Re: Is there a sentence that proves itself is not provable?

Post by Scott Mayers »

PeteOlcott wrote: Fri Apr 05, 2019 2:55 am You are conflating some things there. I will try to boil it down to something much simpler.
If we define truth as derived on the basis of some set of expressions of language,
in the same sort of way that theorems are derived from axioms using rules-of-inference,
and in the same way that true conclusions are derived by valid deduction from true premises,

then it would not be possible for an expression of language G to assert that
it is not provable in F and yet have this assertion be proven in F, and thus be true in F.
Well this response only makes me more confused, not less.

I began a thread, which is unlikely to be read, to address the historical considerations involving "incompleteness". Originally, the rational numbers were thought sufficient to define all numbers. But once the Pythagorean Theorem was proven, say using Euclid's system of reasoning, the hypotenuse was deemed "inexpressible" as a rational number.

This just implied that the system of rational numbers had to be expanded, being understood as "incomplete" without doing so. A newer system of reasoning can be added to the prior logic of Euclid and any theories of Number it assumed, in extension of its postulates (common notions, that is). The question was whether we could have a universal system of reasoning that links all of mathematics under one universal system.

We can PRETEND that there IS such a universal logic system, say 'G', and ask if we can show it to lead to a contradiction. However, this carries some a priori assumptions about the system we permit to be useful in judging this. For this simple question, we would already be assuming at least two factors in this very challenge: that we can "pretend", and that "contradiction" is something we think is not permitted in our judging logic system.

We MAY try to be more inclusive by permitting a system that judges without bias to any a priori conditions. But such a system is itself "trivial", literally meaning three values of semantic meaning. In other words, it is already understood that there can be a universal machine (system logic) that covers all possible logic systems under one umbrella. But it would require an acceptance of "truth" as relative and 'fuzzy' with respect to totality. This would also remove the meaning of seeking some universal logic system, because all systems would be considered logical no matter what, in the most inclusive way.

So we have to pick a judging logic that is based upon the limited definitions of a 'finite' world: those that allow us to 'pretend' (a rule about assumptions) and have a system devoid of contradictions that lack closure. If we permit a system that lacks closure (completeness), we cannot have any reason to pose the question, "is there such a universal logic" or we'd be expanding the meaning of "logic" to include irrational systems of reasoning. [Note how I'm bringing back the word, "irrational", in context to the historical root of the word as well?]

So the first step is to find the simplest system of reasoning that is closed with respect to a finite domain. We then want to pretend that this system is itself a qualifying 'judge' of other systems. Certainly, if there IS a universal system based on its own minimal limits, then there has to be some truth about it being subject to that universal system itself.

Then we use this 'judging' logic, knowing it is complete, and PRETEND that there is a universal system. IF this very system you are using is itself only a subset of a complete universal system, then we assume it complete, as 'proven' by the meaning of "completeness", and then try to see if we can extend this calculating system to ask possible questions about some larger domain OUTSIDE of its defined one.

When we PRETEND that there is a goal we can reach outside of the minimal rules of logic, if we find a contradiction, we must treat the contradiction as proof that what we pretended is either NOT true OR there is some newer system of reasoning that requires extending the minimum laws about things like contradiction or pretenses.

Using this system, with the assumption that contradiction is not allowed, leads us into a contradiction of the judging system. This means that either the 'judging system' is flawed or, at least, "incomplete".

So PRETEND that you are correct, ...that any Incompleteness Theorems are false. You have some system 'judging' this to be true on what minimal logical assumptions? If you permit contradiction in your system, you are always certain to be 'proved' correct, because contradiction allows for more than two truth values, 'true' versus 'false'. You ARE asserting the theorems 'false' with exclusive distinction, are you not? If not, you would at best be saying they are both 'true' and 'false'. If you don't permit something to be both, then you default to a LIMITED system of logic, as already understood by those making those theorems. That is, they already assumed non-contradiction as a primary limitation. And this further means that they, like you, require treating "truth" about these systems as BINARY, not tri-vial.

Using your system, you'd need to show that your own reasoning is itself binary in truth values, or you are defaulting to assuming your system allows for contradiction, and you'd be hypocritical to use it to demonstrate that an external rationale is at fault for assuming non-contradiction systems complete or not. This is something you do not seem to care to do, as you rely on external machines to do this for you (like the Lambda calculus or Prolog, etc).

The point is that the "theorems" used to show incompleteness are supposed to end in their own contradiction. This way you can ask if that 'judging system' itself, using its minimal rule of non-contradiction, is sufficient as a PARENT logic for all systems that can be built upon it. So either "G" is contradictory or not. If proven contradictory when we PRETEND it is the most sufficient machine to exhaustively cover all domains, then it means either that the system is wrong, or that it is correct but there will always be some domain outside of it from which some other MORE complete logic is constructed.

The theorems stand true precisely for your own apparent intent to be non-contradictory and yet be unable to do so without permitting all systems that include at least a third truth value.

This is where I interpret contradiction itself as a foundational reality to everything. It IS a 'trivial' system that implies everything possible in some greatest domain, I call "Totality". It rationally permits "contradiction" to be a force that constantly tries to BE consistent when it can only do so at the cost of becoming 'inconsistent' for trying. Thus a 'universal logic' can exist, but then reduces the meaning of such a 'logic' as inclusive of those things we think of as 'non-logical'. So we have to stick with some fixed limitation of the term 'logical' by some stricter subset of all possibilities or we lose meaning of 'logical' for nothing to be 'illogical'.
PeteOlcott
Posts: 1514
Joined: Mon Jul 25, 2016 6:55 pm

Re: Is there a sentence that proves itself is not provable?

Post by PeteOlcott »

Too verbose
PeteOlcott
Posts: 1514
Joined: Mon Jul 25, 2016 6:55 pm

Re: Is there a sentence that proves itself is not provable?

Post by PeteOlcott »

Scott Mayers wrote: Fri Apr 05, 2019 9:36 pm
PeteOlcott wrote: Fri Apr 05, 2019 2:55 am You are conflating some things there. I will try to boil it down to something much simpler.
If we define truth as derived on the basis of some set of expressions of language,
in the same sort of way that theorems are derived from axioms using rules-of-inference,
and in the same way that true conclusions are derived by valid deduction from true premises,

then it would not be possible for an expression of language G to assert that
it is not provable in F and yet have this assertion be proven in F, and thus be true in F.
Well this response only makes me more confused, not less.

I began a thread, which is unlikely to be read, to address the historical considerations involving "incompleteness". Originally, the rational numbers were thought sufficient to define all numbers. But once the Pythagorean Theorem was proven, say using Euclid's system of reasoning, the hypotenuse was deemed "inexpressible" as a rational number.

This just implied that the system of rational numbers had to be expanded, being understood as "incomplete" without doing so. A newer system of reasoning can be added to the prior logic of Euclid and any theories of Number it assumed, in extension of its postulates (common notions, that is). The question was whether we could have a universal system of reasoning that links all of mathematics under one universal system.

We can PRETEND that there IS such a universal logic system, say 'G', and ask if we can show it to lead to a contradiction. However, this carries some a priori assumptions about the system we permit to be useful in judging this. For this simple question, we would already be assuming at least two factors in this very challenge: that we can "pretend", and that "contradiction" is something we think is not permitted in our judging logic system.

We MAY try to be more inclusive by permitting a system that judges without bias to any a priori conditions. But such a system is itself "trivial", literally meaning three values of semantic meaning. In other words, it is already understood that there can be a universal machine (system logic) that covers all possible logic systems under one umbrella. But it would require an acceptance of "truth" as relative and 'fuzzy' with respect to totality. This would also remove the meaning of seeking some universal logic system, because all systems would be considered logical no matter what, in the most inclusive way.

So we have to pick a judging logic that is based upon the limited definitions of a 'finite' world: those that allow us to 'pretend' (a rule about assumptions) and have a system devoid of contradictions that lack closure. If we permit a system that lacks closure (completeness), we cannot have any reason to pose the question, "is there such a universal logic" or we'd be expanding the meaning of "logic" to include irrational systems of reasoning. [Note how I'm bringing back the word, "irrational", in context to the historical root of the word as well?]

So the first step is to find the simplest system of reasoning that is closed with respect to a finite domain. We then want to pretend that this system is itself a qualifying 'judge' of other systems. Certainly, if there IS a universal system based on its own minimal limits, then there has to be some truth about it being subject to that universal system itself.

Then we use this 'judging' logic, knowing it is complete, and PRETEND that there is a universal system. IF this very system you are using is itself only a subset of a complete universal system, then we assume it complete, as 'proven' by the meaning of "completeness", and then try to see if we can extend this calculating system to ask possible questions about some larger domain OUTSIDE of its defined one.

When we PRETEND that there is a goal we can reach outside of the minimal rules of logic, if we find a contradiction, we must treat the contradiction as proof that what we pretended is either NOT true OR there is some newer system of reasoning that requires extending the minimum laws about things like contradiction or pretenses.

Using this system, with the assumption that contradiction is not allowed, leads us into a contradiction of the judging system. This means that either the 'judging system' is flawed or, at least, "incomplete".

So PRETEND that you are correct, ...that any Incompleteness Theorems are false. You have some system 'judging' this to be true on what minimal logical assumptions? If you permit contradiction in your system, you are always certain to be 'proved' correct, because contradiction allows for more than two truth values, 'true' versus 'false'. You ARE asserting the theorems 'false' with exclusive distinction, are you not? If not, you would at best be saying they are both 'true' and 'false'. If you don't permit something to be both, then you default to a LIMITED system of logic, as already understood by those making those theorems. That is, they already assumed non-contradiction as a primary limitation. And this further means that they, like you, require treating "truth" about these systems as BINARY, not tri-vial.

Using your system, you'd need to show that your own reasoning is itself binary in truth values, or you are defaulting to assuming your system allows for contradiction, and you'd be hypocritical to use it to demonstrate that an external rationale is at fault for assuming non-contradiction systems complete or not. This is something you do not seem to care to do, as you rely on external machines to do this for you (like the Lambda calculus or Prolog, etc).

The point is that the "theorems" used to show incompleteness are supposed to end in their own contradiction. This way you can ask if that 'judging system' itself, using its minimal rule of non-contradiction, is sufficient as a PARENT logic for all systems that can be built upon it. So either "G" is contradictory or not. If proven contradictory when we PRETEND it is the most sufficient machine to exhaustively cover all domains, then it means either that the system is wrong, or that it is correct but there will always be some domain outside of it from which some other MORE complete logic is constructed.

The theorems stand true precisely for your own apparent intent to be non-contradictory and yet be unable to do so without permitting all systems that include at least a third truth value.

This is where I interpret contradiction itself as a foundational reality to everything. It IS a 'trivial' system that implies everything possible in some greatest domain, I call "Totality". It rationally permits "contradiction" to be a force that constantly tries to BE consistent when it can only do so at the cost of becoming 'inconsistent' for trying. Thus a 'universal logic' can exist, but then reduces the meaning of such a 'logic' as inclusive of those things we think of as 'non-logical'. So we have to stick with some fixed limitation of the term 'logical' by some stricter subset of all possibilities or we lose meaning of 'logical' for nothing to be 'illogical'.
I only want to talk about this in terms of these four pages of Tarski's Undefinability proof.
247-248 http://liarparadox.org/247_248.pdf
275-276 http://liarparadox.org/Tarski_Undefinability_Proof.pdf

And my two-page refutation of this proof:
https://www.researchgate.net/publicatio ... ly_Refuted
Logik
Posts: 4041
Joined: Tue Dec 04, 2018 12:48 pm

Re: Is there a sentence that proves itself is not provable?

Post by Logik »

PeteOlcott wrote: Sat Apr 06, 2019 6:23 am I only want to talk about this in terms of these four pages of Tarski's Undefinability proof.
247-248 http://liarparadox.org/247_248.pdf
275-276 http://liarparadox.org/Tarski_Undefinability_Proof.pdf

And my two-page refutation of this proof:
https://www.researchgate.net/publicatio ... ly_Refuted

PeteOlcott wrote: Fri Mar 29, 2019 1:51 am Can this: ∃F∃G(G ↔ ~(F ⊢ G)) be shown to be satisfiable without Gödel numbers ?
Here is your black swan.

https://repl.it/repls/GigaDarkkhakiDebugging

Code:

def implies(a,b)
  return ((not a) or b)   # material implication: a → b ≡ (¬a ∨ b)
end

def f; return true; end   # f: a fixed truth value

$toggle = true
def g; $toggle = (not $toggle); end   # g: flips its value on every call

g == (not implies(f,g))
=> true
PeteOlcott
Posts: 1514
Joined: Mon Jul 25, 2016 6:55 pm

Re: Is there a sentence that proves itself is not provable?

Post by PeteOlcott »

Logik wrote: Sat Apr 06, 2019 7:15 am
PeteOlcott wrote: Sat Apr 06, 2019 6:23 am I only want to talk about this in terms of these four pages of Tarski's Undefinability proof.
247-248 http://liarparadox.org/247_248.pdf
275-276 http://liarparadox.org/Tarski_Undefinability_Proof.pdf

And my two-page refutation of this proof:
https://www.researchgate.net/publicatio ... ly_Refuted

PeteOlcott wrote: Fri Mar 29, 2019 1:51 am Can this: ∃F∃G(G ↔ ~(F ⊢ G)) be shown to be satisfiable without Gödel numbers ?
Here is your black swan.

https://repl.it/repls/GigaDarkkhakiDebugging

Code:

def implies(a,b)
  return ((not a) or b)   # material implication: a → b ≡ (¬a ∨ b)
end

def f; return true; end   # f: a fixed truth value

$toggle = true
def g; $toggle = (not $toggle); end   # g: flips its value on every call

g == (not implies(f,g))
=> true

Like I said OUT-OF-SCOPE. I break his whole proof right here:

Formalizing the Liar Paradox in this way:
True(F, G) ↔ ~(F ⊢ G)
it becomes equivalent to Tarski’s third equation:
3) x ∉ Pr ↔ x ∈ Tr

By Truth axiom (3) we substitute ~True(F, G) for ~(F ⊢ G),
deriving True(F, G) ↔ ~True(F, G); ∴ the Liar_Paradox is false in F.
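Whatever one makes of the substitution step itself, the derived schema True(F, G) ↔ ~True(F, G) has the shape p ↔ ¬p, which is unsatisfiable in two-valued logic. A brute-force check confirms this (a sketch in the thread's Ruby; `iff` is a helper name introduced here):

```ruby
# Biconditional: p <-> q holds exactly when p and q agree
def iff(p, q)
  p == q
end

# p <-> (not p) fails for both truth values, so the schema has no model
results = [true, false].map { |p| iff(p, (not p)) }
# results == [false, false]
```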

Here is the longer version: (Still only half a page)
http://liarparadox.org/index.php/2019/0 ... eexamined/
Atla
Posts: 6812
Joined: Fri Dec 15, 2017 8:27 am

Re: Is there a sentence that proves itself is not provable?

Post by Atla »

PeteOlcott wrote: Tue Mar 26, 2019 11:16 pm The Liar Paradox in English: "This sentence is not true."
The Liar Paradox in Symbolic Logic: LP ↔ ~⊢LP

If a logician hypothesizes that the symbolic logic is a precise translation of
the English sentence, then it is very easy for them to see the error of the Liar Paradox:
LP ↔ ~⊢LP // It can only be true if it can be proven that it is not provable
Maybe. I'm heavily into metaphysics; I only ever use classical 1-step logic to describe nature (or some 1-step fuzzy/quantum logic), and that works perfectly. I admit that I don't really understand what logicians are preoccupied with.

When I look at the Liar Paradox, I see it as inherently self-contradictory. It violates non-contradiction, and is therefore not logical, and that's the end of the story, it's discarded. So I'm not sure what
It can only be true if it can be proven that it is not provable
means, because wouldn't that assume that we are still treating it as logical?
PeteOlcott
Posts: 1514
Joined: Mon Jul 25, 2016 6:55 pm

Re: Is there a sentence that proves itself is not provable?

Post by PeteOlcott »

Atla wrote: Sun Apr 07, 2019 5:39 am
PeteOlcott wrote: Tue Mar 26, 2019 11:16 pm The Liar Paradox in English: "This sentence is not true."
The Liar Paradox in Symbolic Logic: LP ↔ ~⊢LP

If a logician hypothesizes that the symbolic logic is a precise translation of
the English sentence, then it is very easy for them to see the error of the Liar Paradox:
LP ↔ ~⊢LP // It can only be true if it can be proven that it is not provable
Maybe. I'm heavily into metaphysics; I only ever use classical 1-step logic to describe nature (or some 1-step fuzzy/quantum logic), and that works perfectly. I admit that I don't really understand what logicians are preoccupied with.

When I look at the Liar Paradox, I see it as inherently self-contradictory. It violates non-contradiction, and is therefore not logical, and that's the end of the story, it's discarded. So I'm not sure what
It can only be true if it can be proven that it is not provable
means, because wouldn't that assume that we are still treating it as logical?
Yes, within the ordinary Liar Paradox your reasoning is perfectly correct:
"This sentence is false" is semantically unsound.

Tarski based his proof on the strong Liar Paradox, "This sentence is not true",
which turns out to actually be false when formalized according to
3) x ∉ Pr ↔ x ∈ Tr // page 275 of the Tarski proof

and analyzed within the context of my universal truth predicate axiom:
(3) ∀F ∈ Formal_Systems ∀x ∈ WFF(F) (~True(F, x) ↔ ~(F ⊢ x))
This causes Tarski's whole proof to fail at step (3) shown above.
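As an illustration of what such an axiom does to the liar sentence (an exhaustive truth-value search sketched here, not taken from either paper): once True(F, LP) is identified with F ⊢ LP, and the T-schema identifies True(F, LP) with LP, no assignment of truth values satisfies the liar biconditional LP ↔ ~(F ⊢ LP):

```ruby
# Search all truth-value assignments for LP and for "F proves LP"
solutions = []
[true, false].each do |lp|
  [true, false].each do |pr|
    tr     = pr                # axiom (3): True(F, LP) <-> (F |- LP)
    liar   = (lp == (not pr))  # LP <-> ~(F |- LP)
    schema = (tr == lp)        # T-schema: True(F, LP) <-> LP
    solutions << [lp, pr] if liar && schema
  end
end
# solutions stays empty: the constraints are jointly unsatisfiable
```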
Logik
Posts: 4041
Joined: Tue Dec 04, 2018 12:48 pm

Re: Is there a sentence that proves itself is not provable?

Post by Logik »

PeteOlcott wrote: Sun Apr 07, 2019 4:12 am Like I said OUT-OF-SCOPE. I break his whole proof right here:
What does "out of scope" mean? I have given you an algorithm that satisfies the requirement you put forth.
PeteOlcott wrote: Sun Apr 07, 2019 4:12 am True(F, G) ↔ ~(F ⊢ G)
The Ruby algorithm I gave you satisfies the above.

If that's "out of scope" then give me a formal expression that you want satisfied and I will give you an algorithm for it.
Last edited by Logik on Sun Apr 07, 2019 9:15 am, edited 2 times in total.
Logik
Posts: 4041
Joined: Tue Dec 04, 2018 12:48 pm

Re: Is there a sentence that proves itself is not provable?

Post by Logik »

PeteOlcott wrote: Sun Apr 07, 2019 6:26 am "This sentence is false" is semantically unsound.
It's not unsound.

It is an intentionally formulated sentence, and as far as I can tell all intent is semantically sound.

The intention of the sentence is to express the concept of self-reference/recursion.

The reason it seems like a paradox is because you are trying to parse the symbols, not the intention behind the symbols. You have mistaken the symbols for meaning.

Simply: The sentence was encoded in system F (my mind), but it is being decoded in system G (your mind). And system G is oblivious to the notion of intent.