Pure functions? What the heck are those ?!?!


Skepdick
Posts: 14520
Joined: Fri Jun 14, 2019 11:16 am

Re: Pure functions? What the heck are those ?!?!

Post by Skepdick »

PeteOlcott wrote: Thu Jul 21, 2022 7:04 pm I hope that you are correct about that, others seem to disagree.
For example others seem to think that a C function that depends
on local static memory is not a computable function. I formerly
had your point of view on this for many decades.
What do you mean by "local memory"? "Local" from which perspective - the program, or the operating system?

Modern operating systems allocate virtual memory to processes. Memory is protected in so far as one process cannot access another process' memory.
What's "local" to the operating system; or one program may not be local to another program.

What do you mean by "static"? Immutable?
PeteOlcott
Posts: 1527
Joined: Mon Jul 25, 2016 6:55 pm

Re: Pure functions? What the heck are those ?!?!

Post by PeteOlcott »

Skepdick wrote: Fri Jul 22, 2022 5:28 pm
PeteOlcott wrote: Thu Jul 21, 2022 7:04 pm I hope that you are correct about that, others seem to disagree.
For example others seem to think that a C function that depends
on local static memory is not a computable function. I formerly
had your point of view on this for many decades.
What do you mean by "local memory"? "Local" from which perspective - the program, or the operating system?

Modern operating systems allocate virtual memory to processes. Memory is protected in so far as one process cannot access another process' memory.
What's "local" to the operating system; or one program may not be local to another program.

What do you mean by "static"? Immutable?
I already said that I mean static local memory of a C function.
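For concreteness, one illustration (a hypothetical Haskell analogue rather than actual C, with an IORef standing in for the static local cell) of why such a function is not a pure function of its arguments:

import Data.IORef

-- hypothetical analogue of a C function with a static local counter:
-- the result depends on a hidden mutable cell, not only on the argument,
-- so it is not a pure function of its inputs
countingAdd :: IORef Int -> Int -> IO Int
countingAdd cell x = do
  n <- readIORef cell        -- read the hidden "static" state
  writeIORef cell (n + 1)    -- mutate it for the next call
  return (x + n)

-- a pure function: the result depends only on the arguments
pureAdd :: Int -> Int -> Int
pureAdd x y = x + y

main :: IO ()
main = do
  cell <- newIORef 0
  r1 <- countingAdd cell 5   -- 5
  r2 <- countingAdd cell 5   -- 6: same argument, different result
  print (r1, r2, pureAdd 5 0)

The point of the sketch is only that the first function's output is not determined by its arguments alone.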
Skepdick
Posts: 14520
Joined: Fri Jun 14, 2019 11:16 am

Re: Pure functions? What the heck are those ?!?!

Post by Skepdick »

PeteOlcott wrote: Fri Jul 22, 2022 7:08 pm I already said that I mean static local memory of a C function.
I know what you said. I have no idea what it means.

Your semantic function is a mystery. Just as much as the syntax/semantics distinction is a mystery to you.
PeteOlcott
Posts: 1527
Joined: Mon Jul 25, 2016 6:55 pm

Re: Pure functions? What the heck are those ?!?!

Post by PeteOlcott »

Skepdick wrote: Fri Jul 22, 2022 7:27 pm
PeteOlcott wrote: Fri Jul 22, 2022 7:08 pm I already said that I mean static local memory of a C function.
I know what you said. I have no idea what it means.

Your semantic function is a mystery. Just as much as the syntax/semantics distinction is a mystery to you.
Semantics (computer science)
https://en.wikipedia.org/wiki/Semantics ... r_science)
Skepdick
Posts: 14520
Joined: Fri Jun 14, 2019 11:16 am

Re: Pure functions? What the heck are those ?!?!

Post by Skepdick »

PeteOlcott wrote: Sat Jul 23, 2022 1:23 pm
Skepdick wrote: Fri Jul 22, 2022 7:27 pm
PeteOlcott wrote: Fri Jul 22, 2022 7:08 pm I already said that I mean static local memory of a C function.
I know what you said. I have no idea what it means.

Your semantic function is a mystery. Just as much as the syntax/semantics distinction is a mystery to you.
Semantics (computer science)
https://en.wikipedia.org/wiki/Semantics ... r_science)
Pete, I know exactly what semantics is and how it's different to syntax.

Pointing me to a Wikipedia article instead of clarifying the meaning of your syntax in your own words only affirms that you don't understand the difference.
wtf
Posts: 1179
Joined: Tue Sep 08, 2015 11:36 pm

Re: Pure functions? What the heck are those ?!?!

Post by wtf »

Skepdick wrote: Tue Jul 19, 2022 9:39 am
Yes. Any externally-interruptible process is not a TM.
Ok. But it also happens to be the case that systems composed of multiple, mutually interrupting and intercommunicating processes nevertheless give no increase in computational power. You still can't break the Church-Turing thesis limit.
Skepdick wrote: Tue Jul 19, 2022 9:39 am In practice each computer in the world is interacting with other computers AND with the humans using them. Humans drive interrupts. Keyboard, Mouse clicks...
Ok. If I understand you, you are making the point that human input is a randomizing factor.

Well ok, suppose it is. It's well-known that nondeterministic automata still don't give any computational advantage over deterministic ones. So you can have an algorithm that branches based on the low-order bit of the femtosecond timestamp of the next neutrino to hit your detector; but it still will not be able to compute anything that a TM already can't.

It is possible that it might be able to do it more efficiently. That is a question for the subject of complexity theory. But in computability theory, there is no benefit or improvement.
Skepdick wrote: Tue Jul 19, 2022 9:39 am External interrupts render the system open and non-deterministic.
I can perfectly well agree with this statement. But it doesn't MEAN anything. It doesn't give you any increase in computational power relative to a basic TM.

Not only that, even TMs can be said to have "external input." If I have a TM, it will behave differently depending on what input you give to it. If you have a TM called T, say, that inputs a natural number and outputs a natural number (such as, for example, inputting the number n, and outputting the n-th decimal digit of pi) a human might supply inputs randomly.

You call that nondeterminism. But it's still a TM. You can regard any external input as just an input to a Turing machine. And regardless, it doesn't matter. It doesn't have any more computational power than a basic old TM.
Skepdick wrote: Tue Jul 19, 2022 9:39 am Then I must be mistaken in the finer details. If the process calculus is deterministic then the process calculus is as inadequate in accounting for modern computers as Turing machines are!
It's nondeterministic in the sense that humans can supply random input. Just as they could to a Turing machine by randomly choosing the input. But IT DOESN'T MATTER. A nondeterministic computation has no more computational power than a deterministic one.
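To illustrate (a minimal, hypothetical Haskell sketch of the standard simulation argument, not anyone's code from this thread): a deterministic program can simply enumerate every branch a nondeterministic machine could take, so branching buys speed at best, never new computability.

-- a hypothetical branching step: from each state, several possible successors
step :: Int -> [Int]
step s = [2 * s, 2 * s + 1]

-- deterministically enumerate every state reachable in n branching steps
reachable :: Int -> Int -> [Int]
reachable 0 s = [s]
reachable n s = concatMap (reachable (n - 1)) (step s)

main :: IO ()
main = print (reachable 3 1)   -- all states any nondeterministic run could reach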

Perhaps you are saying that it might have a more favorable complexity class. It well might. But it doesn't have a more favorable computability class, because you can't break the Church-Turing thesis barrier using nondeterministic or random inputs.

I hope I've made my point clear.

* Nondeterministic computations don't have any more computational power than deterministic ones.

* So I would say that I don't understand the point you're trying to make. I actually AGREE with your premise, that modern distributed multiprocessing systems are essentially random. If you're running an online travel site, you don't know if the next person is going to log in and book a particular hotel room, plane flight, and rental car. It's unpredictable.

But it doesn't give you any more computational power. So what exact point are you trying to make with this, once I've totally agreed with your premise?
Skepdick
Posts: 14520
Joined: Fri Jun 14, 2019 11:16 am

Re: Pure functions? What the heck are those ?!?!

Post by Skepdick »

wtf wrote: Sun Jul 24, 2022 9:38 am Ok. But it also happens to be the case that systems composed of multiple, mutually interrupting and intercommunicating processes nevertheless give no increase in computational power. You still can't break the Church-Turing thesis limit.
That's not true. It depends on what you are inter-connected to.

If the inter-connected things are able to decide Turing-undecidable problems then that is obviously a more powerful system.

Perhaps we are disagreeing about the definition of "power"? Whatever "power" means - if I can decide on more things than you can then I am more powerful than you.
wtf wrote: Sun Jul 24, 2022 9:38 am Ok. If I understand you, you are making the point that human input is a randomizing factor.
I am making the point that being connected to a human allows decisions to be made about Turing-undecidable problems.

All of those undecidable semantic properties that Rice's theorem talks about. A TM can't decide on them. A human can.

Is this matrix of pixels a picture of a cat?
wtf wrote: Sun Jul 24, 2022 9:38 am It is possible that it might be able to do it more efficiently. That is a question for the subject of complexity theory. But in computability theory, there is no benefit or improvement.
Well, yeah! Efficiency is one of those semantic properties that humans can decide on and Turing machines cannot!
wtf wrote: Sun Jul 24, 2022 9:38 am I can perfectly well agree with this statement. But it doesn't MEAN anything. It doesn't give you any increase in computational power relative to a basic TM.
Of course it does. I am 100% certain that a program will terminate - if I send it the signal to terminate. It means I can determine whether a program will terminate; while a Turing machine cannot determine that.

It may not be meaningful in denotational semantics, but it is meaningful to humans.
wtf wrote: Sun Jul 24, 2022 9:38 am It doesn't have any more computational power than a basic old TM.
Then your metric for computational power is qualitatively blind. Runtime bounds matter.

A computer which solves a problem in one operation is not the same thing as a computer which solves a problem in two operations.
wtf wrote: Sun Jul 24, 2022 9:38 am It's nondeterministic in the sense that humans can supply random input. Just as they could to a Turing machine by randomly choosing the input. But IT DOESN'T MATTER. A nondeterministic computation has no more computational power than a deterministic one.
So you don't make a semantic/qualitative distinction between 2x programs which run for 5 seconds in series and 2x programs which run for 5 seconds in parallel ?!?

It's pretty obvious that a computational system which obtains a result in 1/nth the time is n times as powerful.
wtf wrote: Sun Jul 24, 2022 9:38 am Perhaps you are saying that it might have a more favorable complexity class. It well might. But it doesn't have a more favorable computability class, because you can't break the Church-Turing thesis barrier using nondeterministic or random inputs.
No. I am talking about hypercomputations. Stuff beyond the Church-Turing barrier. The most trivial example being the question "Is x=x a theorem?"
wtf wrote: Sun Jul 24, 2022 9:38 am But it doesn't give you any more computational power. So what exact point are you trying to make with this, once I've totally agreed with your premise?
The point I am making is that the Church-Turing thesis is a terrible model of computation in 2022! It's too incomplete. Too theoretical.

It misses out on a bunch of qualitative/empirical facts which are part and parcel of the human-computer interactions!

Latency/responsiveness being one of them.
wtf
Posts: 1179
Joined: Tue Sep 08, 2015 11:36 pm

Re: Pure functions? What the heck are those ?!?!

Post by wtf »

Skepdick wrote:
That's not true. It depends on what you are inter-connected to.

If the inter-connected things are able to decide Turing-undecidable problems then that is obviously a more powerful system.
Perfectly true. This is basic computability theory. You start with a basic TM and keep adding oracles. You get a well-ordered collection of increasingly powerful systems of computation. Turing defined all of this in the 1930's and 1940's.
Skepdick wrote:
Perhaps we are disagreeing about the definition of "power"? Whatever "power" means - if I can decide on more things than you can then I am more powerful than you.
Perfectly well agreed. This is [url=https://en.wikipedia.org/wiki/Computability_theory]computability theory[/url]: the study of what functions are computable by algorithms.

Now [url=https://en.wikipedia.org/wiki/Computati ... ity_theory]complexity theory[/url] is the study of the runtime and/or space efficiency, as a function of the size of the input, of a given computable function.

So (as an example) f(x) = x^2 and f(x) = e^x are both computable functions of real numbers; but x^2 has polynomial time growth and e^x has exponential time growth, so in complexity theory, e^x is the faster-growing and therefore worse of the two.
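(A tiny worked illustration of that - my own numbers, using 2^x in place of e^x to stay with integers: both functions are computable, and the only thing complexity theory distinguishes is how fast they grow.)

square, expo :: Integer -> Integer
square x = x * x   -- polynomial growth
expo   x = 2 ^ x   -- exponential growth

main :: IO ()
main = mapM_ (\x -> print (x, square x, expo x)) [1, 5, 10, 20, 40]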
Skepdick wrote:
I am making the point that being connected to a human allows decisions to be made about Turing-undecidable problems.
If that were true, some human would have solved the Halting problem; and we can't do that. Or at least it's not known if a human can solve the Halting problem. If a human can do that, then humans can do something that machines can't, therefore our minds are not Turing machines. This is a rather important philosophical question, which is currently open.
Skepdick wrote:
All of those undecidable semantic properties that Rice's theorem talks about. A TM can't decide on them. A human can.
I looked up Rice's theorem and did not see the relevance. It's unknown if a human can solve the general Halting problem. Believe me if it were known, one way or the other, we'd have all heard about it.
Skepdick wrote:
Is this matrix of pixels a picture of a cat?
Not following the intent of your question. Are the pixels that make up this post, sentences in the English language?

Skepdick wrote: Well, yeah! Efficiency is one of those semantic properties that humans can decide on and Turing machines cannot!
Complexity theory is a different branch of CS. Given that a function is Turing-computable, we can ask how efficient it is as a function of the size of the input. It's unclear to me if computing the efficiency of a given algorithm is itself noncomputable. I suspect it's perfectly well computable.
Skepdick wrote: Of course it does. I am 100% certain that a program will terminate - if I send it the signal to terminate. It means I can determine whether a program will terminate; while a Turing machine cannot determine that.
Saying that you can always pull the power plug out of your computer is not regarded by computer scientists as a refutation of Turing's proof of the unsolvability of the general Halting problem.
Skepdick wrote: It may not be meaningful in denotational semantics, but it is meaningful to humans.
A lot of things are meaningful to humans that are not meaningful to computers.
Skepdick wrote: Then your metric for computational power is qualitatively blind. Runtime bounds matter.
My metric? I'm only explaining to you the distinction made in the field of computer science between computability and complexity.

But you are contradicting yourself. Earlier you said that the measure of computational power is that one system is more powerful than another if it can compute more things. Well, that's exactly what computability theory is about. It says nothing about the efficiency of a given computation.

The question of whether something can be computed is computability theory. The question of how efficiently it can be computed is complexity theory. I didn't make this up, I'm just explaining to you what these standard distinctions are. I supplied relevant Wikipedia links.
Skepdick wrote: A computer which solves a problem in one operation is not the same thing as a computer which solves a problem in two operations.
Perfectly true. They both compute the same thing, so they have the same computational power. One is more efficient than the other, so it has more efficient complexity.

I don't see why this should trouble you. Two things could have the same mass but a different size. There are different ways of measuring things.
Skepdick wrote: So you don't make a semantic/qualitative distinction between 2x programs which run for 5 seconds in series and 2x programs which run for 5 seconds in parallel ?!?
Well if they both run in 5 seconds, what do we care about the program internals? I don't follow the question.

Skepdick wrote: It's pretty obvious that a computational system which obtains a result in 1/nth the time is n times as powerful.
Ok. You have contradicted something you said earlier. I am going to bold the following so that you'll notice it.

I hope you will read what I'm about to explain, and thoughtfully process it in your own mind

Consider a set of functions, say from the natural numbers to the natural numbers. One function's the squaring function, and another might represent some polynomial, and some other one might be entirely random.

A given "system of logic," as Turing put it, is a TM-like system along with some oracles. So the basic TM system is capable of computing a certain subset of those functions. Those are the computable functions. Call this system T0.

If we add an oracle for the Halting problem, we get a more powerful system, T1. T1 can compute all the functions T0 can, plus some more.

But by the same proof that the Halting problem for T0 is unsolvable, so is T1, and we can add yet a second oracle to get T2. And so forth. We get a sequence of ever more powerful systems, each one able to compute more things than the prior system.
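(If it helps, here is the shape of that diagonal argument as a sketch - hypothetical Haskell, since the `halts` oracle below cannot actually be written; the same construction relativizes to T1, T2, and so on.)

-- ASSUMPTION, for contradiction: a total halting decider exists.
-- No such Haskell function can be written; this stub only marks where
-- the hypothetical oracle would sit.
halts :: IO () -> Bool
halts _ = error "hypothetical halting oracle"

loopForever :: IO ()
loopForever = loopForever

-- the diagonal program does the opposite of whatever the oracle predicts
-- about it, so no decider inside the system can be right about it; adding
-- the oracle as a new primitive yields the strictly stronger system T1,
-- and the same argument then applies to T1, T2, ...
diagonal :: IO ()
diagonal = if halts diagonal then loopForever else return ()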

So earlier you wrote,
Skepdick wrote: If the inter-connected things are able to decide Turing-undecidable problems then that is obviously a more powerful system.
RIGHT! You are agreeing that one system is more powerful than another if it can compute everything the weaker system can compute, plus some other stuff.

Just able to compute. No mention of efficiency! And you are correct, this is the definition of power in computability theory.

And now you say:
Skepdick wrote: It's pretty obvious that a computational system which obtains a result in 1/nth the time is n times as powerful.
Yes in terms of complexity, or efficiency. But not computability. By your own agreement earlier, if the two systems can compute the same set of functions, they have the same computational power; EVEN if one is more efficient than the other.

[And by the way, in complexity theory, 7x is as good as x. Constant factors don't matter when it comes to complexity class. What matters is algorithmic complexity in the sense that exponentials grow much faster than polynomials. But no actual distinction is made within the polynomials, for example. x and 7x and a million x have exactly the same efficiency. Constant coefficients don't count. For what it's worth, this was peripheral to the main discussion].

So I hope you will get clear in your mind the distinction between:

* Computability: a system is more powerful than another by virtue of being able to compute more things. Efficiency doesn't matter.

* Complexity: a system is more powerful than another if it computes the same things, but more efficiently in terms of classes like logarithmic, polynomial, exponential, etc.

So your 1/n example only counts in complexity theory. [And not even then, for the reason I noted. Constant multipliers don't make a difference]. If they can both compute the same things, they have the same power -- which you already agreed to earlier.

I hope this is clear.


Skepdick wrote: No. I am talking about hypercomputations. Stuff beyond the Church-Turing barrier.
Hypercomputation. Doing infinitely many operations in finite time. Isn't that completely hypothetical? Impossible to implement according to the currently understood laws of physics?

That's all well and good to speculate about hypercomputation, but then how on earth can you care about efficiency? If you have hypercomputation, you don't need efficiency at all! Everything gets done in arbitrarily small amounts of time.

So again, you are going off in contradictory directions. If you want to talk about program efficiency, that's a very real-world concern. Hypercomputation is a completely other-world concern, the realm of the purely abstract. And if we had hypercomputation, we would not need to care at all about efficiency! Who cares how inefficient an algorithm is if we can execute it with infinite speed!!

Skepdick wrote: The most trivial example being the question "Is x=x a theorem?"
This I must say baffles me. I'm lost. I don't understand the question, and I don't see how this is an example of a hypercomputation. If you can explain what you are thinking with any clarity, I'd be curious. This seems very murky to me. What is "Is x = x a theorem?" supposed to mean, and what does hypercomputation have to do with it?
Skepdick wrote: The point I am making is that the Church-Turing thesis is a terrible model of computation in 2022! It's too incomplete. Too theoretical.
Too theoretical. Well yes, it lives in the realm of theoretical computer science. Are you objecting to CS professors plying their trade?

You seem to be unhappy at the practice of computer science. I'm not sure why this is. I'm not the lord high defender of the realm. I'm just trying to explain what little I know about the subject. There's computability and there's complexity, and they're related but different.

For what it's worth, complexity theory is a more RECENT concern. So in the 1930s people cared about computability; and most contemporary research is in complexity. Perhaps this is what you're getting at. Computability is too broad a brush, and complexity captures more of the fine points. And as computer science progresses, attention is being turned more toward complexity. I don't think anyone would disagree with you. The big theorems in computability are from the era of Turing, Gödel, Church, Post, et al. in the 1930s.

I think this is just the phenomenon you noticed. Computability is an older notion. It was discovered and thought about first. It doesn't slice as fine as complexity. And when it comes to real-world impact, complexity is everything, because real-world resources are limited. A polynomial time algorithm to factor integers is a really big deal, much better than exponential. That's why quantum computers can in theory break contemporary public-key cryptography. So complexity theory is more practical than computability theory.
Skepdick wrote: It misses out on a bunch of qualitative/empirical facts which are part and parcel of the human-computer interactions!
Just as zoology misses out on the fine points of quark confinement. Computability and complexity operate at different levels of discourse. They're concerned with different things.
Skepdick wrote: Latency/responsiveness being one of them.
Ok. Who is disagreeing with you? Not me, surely. Computability and complexity. Two overlapping but separate ways of looking at computations.
Skepdick
Posts: 14520
Joined: Fri Jun 14, 2019 11:16 am

Re: Pure functions? What the heck are those ?!?!

Post by Skepdick »

Skipping over all the agreement - we are on the same page.
wtf wrote: Sat Jul 30, 2022 4:44 am is the study of the runtime and/or space efficiency, as a function of the size of the input, of a given computable function.

So (as an example) f(x) = x^2 and f(x) = e^x are both computable functions of real numbers; but x^2 has polynomial time growth and e^x has exponential time growth, so in complexity theory, e^x is the faster-growing and therefore worse of the two.
It's not about betterness or worseness. It's about the capabilities of different computational models.

Infinite Time Turing Machines (ITTMs) can do things that Turing Machines (TMs) cannot! They can evaluate infinite computations.
So even functions with ω time complexity are computable in this model. Such as the co-recursive function which generates 0 and ALL of its successors. An ITTM evaluates this corecursive function to ℕ.

On an ITTM the generating function terminates. On a Turing Machine it doesn't.

When you corecursively compute ℕ on an ITTM the computation eventually halts and puts ℕ on the tape as the value of the function.

That's the epitome of contractible spaces/-2 homotopy types. Data compression. This is what we, humans do! We pretend that the corecursive definition of ℕ halts! It doesn't. But for the purpose of symbolic computation we pretend.

An ITTM can do data compression on infinite data structures. A TM cannot.
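Here is the corecursive definition I keep referring to, as a sketch in ordinary Haskell (so on real, TM-equivalent hardware only finite prefixes are ever observable; the claim is that an ITTM could run the whole thing to completion):

-- "0 and ALL of its successors", defined corecursively
nats :: [Integer]
nats = 0 : map (+ 1) nats

main :: IO ()
main = print (take 10 nats)   -- [0,1,2,3,4,5,6,7,8,9]; printing all of nats never terminates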

wtf wrote: Sat Jul 30, 2022 4:44 am If that were true, some human would have solved the Halting problem; and we can't do that.
Obviously we can't solve the general case. But we can and do solve particular cases all the time! Like any oracle would. Like a TM can't.

Such as the particular case of evaluating the corecursive definition of n = 0, suc(n), suc(suc(n))... to ℕ !
wtf wrote: Sat Jul 30, 2022 4:44 am Or at least it's not known if a human can solve the Halting problem. If a human can do that, then humans can do something that machines can't, therefore our minds are not Turing machines. This is a rather important philosophical question, which is currently open.
Our minds aren't Turing Machines! Our minds might be (closer to) Infinite Time Turing machines!

We pretend as if we can see past the end of non-halting/infinite computations! Such as the computation of ℕ.
wtf wrote: Sat Jul 30, 2022 4:44 am I looked up Rice's theorem and did not see the relevance. It's unknown if a human can solve the general Halting problem. Believe me if it were known, one way or the other, we'd have all heard about it.
The general case is NOT solvable. Nobody is seeking perfection here. We are only seeking to automate what is automatable.

Far less is automatable if you use a TM as your model of computation, instead of an ITTM.
wtf wrote: Sat Jul 30, 2022 4:44 am Not following the intent of your question. Are the pixels that make up this post, sentences in the English language?
The intent is to show that there is a class of decision problems you can solve, that a Turing machine cannot.
It's obvious that you can determine whether a particular corecursive definition amounts to ℕ, whereas a TM cannot. Humans have at least some of the computational power of an ITTM.

It's far less obvious whether there is a class of decision problems you can solve that an ITTM cannot.
wtf wrote: Sat Jul 30, 2022 4:44 am Complexity theory is a different branch of CS. Given that a function is Turing-computable, we can ask how efficient it is as a function of the size of the input. It's unclear to me if computing the efficiency of a given algorithm is itself noncomputable. I suspect it's perfectly well computable.
Complexity theory becomes irrelevant on an ITTM. You have a model of computation which can perform supertasks.

Computing the natural numbers with a co-recursive function with a base-case 0 is input size 1, output size ω. That's just the Y-axis of the Big-O graph!

wtf wrote: Sat Jul 30, 2022 4:44 am Saying that you can always pull the power plug out of your computer is not regarded by computer scientists as a refutation of Turing's proof of the unsolvability of the general Halting problem.
But it is regarded as a valid evaluation strategy - lazy evaluation! I can suspend some infinitary computation (as recognised by me!) at any point!
And decide that it will never terminate. Such as any computation of ℕ.

The Agda people used to view coinduction/corecursion exactly this way, until they switched to thinking about it as just processing infinite data streams.
wtf wrote: Sat Jul 30, 2022 4:44 am A lot of things are meaningful to humans that are not meaningful to computers.
Yeah! And a lot of things are meaningful to ITTMs that are not meaningful to TMs!

So it depends on what you mean by "computer". It depends on which model of computation you assume as foundational.
wtf wrote: Sat Jul 30, 2022 4:44 am My metric? I'm only explaining to you the distinction made in the field of computer science between computability and complexity.
And I am explaining to you that they are one and the same problem from an ITTM perspective. Searching the proof-space is really really hard. And it's even harder to know when to stop searching.
wtf wrote: Sat Jul 30, 2022 4:44 am But you are contradicting yourself. Earlier you said that the measure of computational power is that one system is more powerful than another if it can compute more things. Well, that's exactly what computability theory is about. It says nothing about the efficiency of a given computation.

The question of whether something can be computed is computability theory. The question of how efficiently it can be computed is complexity theory. I didn't make this up, I'm just explaining to you what these standard distinctions are.
The question of efficiency is incoherent for infinitary computations.

Computing ℕ halts on an ITTM.
Computing ℕ doesn't halt on a TM.

Which one is "more efficient?"
wtf wrote: Sat Jul 30, 2022 4:44 am Perfectly true. They both compute the same thing, so they have the same computational power.
They are both computing the same thing. ℕ. One finishes computing ℕ. The other doesn't.

The question of efficiency and power become one and the same question.
wtf wrote: Sat Jul 30, 2022 4:44 am I don't see why this should trouble you. Two things could have the same mass but a different size. There are different ways of measuring things.
It matters to me how long it takes to measure stuff. Measuring measurements matters.
wtf wrote: Sat Jul 30, 2022 4:44 am Well if they both run in 5 seconds, what do we care about the program internals? I don't follow the question.
Well, you don't care about time at all, right? So why should you care whether the computation takes 0 or ω ? Just as long as it terminates.
wtf wrote: Sat Jul 30, 2022 4:44 am Consider a set of functions, say from the natural numbers to the natural numbers. One function's the squaring function, and another might represent some polynomial, and some other one might be entirely random.
This is a terrible framing! For functions of the type ℕ -> ℕ everything is Turing-computable. For higher order types representation becomes a relevant factor for whether a function is computable, or not!
wtf wrote: Sat Jul 30, 2022 4:44 am A given "system of logic," as Turing put it, is a TM-like system along with some oracles. So the basic TM system is capable of computing a certain subset of those functions. Those are the computable functions. Call this system T0.
These are the Turing-computable functions. ℕ -> ℕ !
wtf wrote: Sat Jul 30, 2022 4:44 am If we add an oracle for the Halting problem, we get a more powerful system, T1. T1 can compute all the functions T0 can, plus some more.
I thought we agreed about that already?
wtf wrote: Sat Jul 30, 2022 4:44 am But by the same proof that the Halting problem for T0 is unsolvable, so is T1, and we can add yet a second oracle to get T2. And so forth. We get a sequence of ever more powerful systems, each one able to compute more things than the prior system.
We understand each other perfectly! I am pointing out that T0 can't even corecursively compute ℕ! Agda can.

Draw some inferences from that on whether Agda is a more or less powerful model than a TM.
wtf wrote: Sat Jul 30, 2022 4:44 am RIGHT! You are agreeing that one system is more powerful than another if it can compute everything the weaker system can compute, plus some other stuff.

Just able to compute. No mention of efficiency! And you are correct, this is the definition of power in computability theory.
There's really no point repeating myself here, but hey... efficiency is moot when one computation halts and the other doesn't!
wtf wrote: Sat Jul 30, 2022 4:44 am Yes in terms of complexity, or efficiency. But not computability. By your own agreement earlier, if the two systems can compute the same set of functions, they have the same computational power; EVEN if one is more efficient than the other.
OK so you are then saying that a computation which completes in 0 steps is the same as a computation which completes in ω steps?
Or are you saying that a computation which takes ω steps is the same thing as a non-halting computation?
wtf wrote: Sat Jul 30, 2022 4:44 am [And by the way, in complexity theory, 7x is as good as x. Constant factors don't matter when it comes to complexity class. What matters is algorithmic complexity in the sense that exponentials grow much faster than polynomials. But no actual distinction is made within the polynomials, for example. x and 7x and a million x have exactly the same efficiency. Constant coefficients don't count. For what it's worth, this was peripheral to the main discussion].
Well. It depends! Are infinities constants? They are in Agda!
The type ∞ A can be seen as a suspended computation of type A. It comes with delay and force functions:

♯_ : ∀ {a} {A : Set a} → A → ∞ A
♭ : ∀ {a} {A : Set a} → ∞ A → A

Values of coinductive types can be constructed using corecursion, which does not need to terminate, but has to be productive. As an approximation to productivity the termination checker requires that corecursive definitions are guarded by coinductive constructors. As an example the infinite “natural number” can be defined as follows:

inf : Coℕ
inf = suc (♯ inf)
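A rough Haskell rendering of the same idea, for anyone without Agda at hand (here plain laziness plays the role of ♯ and ♭): the definition is productive, never a finished value, and any finite amount of it can be forced.

data CoNat = Zero | Suc CoNat

-- productive corecursion: the Haskell analogue of the Agda `inf` above
inf :: CoNat
inf = Suc inf

-- force a bounded prefix: count at most `fuel` many Suc constructors
observe :: Int -> CoNat -> Int
observe 0 _          = 0
observe _ Zero       = 0
observe fuel (Suc n) = 1 + observe (fuel - 1) n

main :: IO ()
main = print (observe 7 inf)   -- 7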
wtf wrote: Sat Jul 30, 2022 4:44 am So I hope you will get clear in your mind the distinction between:

* Computability: a system is more powerful than another by virtue of being able to compute more things. Efficiency doesn't matter.
So just to double-check my understanding of your understanding.

You view 0-step computations and ω-step computations as being equivalently efficient?
wtf wrote: Sat Jul 30, 2022 4:44 am * Complexity: a system is more powerful than another if it computes the same things, but more efficiently in terms of classes like logarithmic, polynomial, exponential, etc.
infinity is the highest complexity class in complexity theory.
wtf wrote: Sat Jul 30, 2022 4:44 am Hypercomputation. Doing infinitely many operations in finite time. Isn't that completely hypothetical? Impossible to implement according to the currently understood laws of physics?
Well! It depends on how you interpret our current situation.

In order to define ℕ we have to define ALL of its infinitely many elements.
In order to define ℕ we have to do infinitely many things (definitions) in finite time.

And yet... Here we are. Pretending that Peano's co-recursive definition of ℕ has already done that for us.
wtf wrote: Sat Jul 30, 2022 4:44 am That's all well and good to speculate about hypercomputation, but then how on earth can you care about efficiency? If you have hypercomputation, you don't need efficiency at all! Everything gets done in arbitrarily small amounts of time.
Well. Everything gets done in arbitrarily small infinities - sure.

Those are still pretty long waiting times for us humans.
wtf wrote: Sat Jul 30, 2022 4:44 am So again, you are going off in contradictory directions. If you want to talk about program efficiency, that's a very real-world concern. Hypercomputation is a completely other-world concern, the realm of the purely abstract.
The connection between the abstract and the realizable IS the interplay between model theory and proof theory! Can you write the proof for an ITTM such as a human?
wtf wrote: Sat Jul 30, 2022 4:44 am And if we had hypercomputation, we would not need to care at all about efficiency! Who cares how inefficient an algorithm is if we can execute it with infinite speed!!
This is PRECISELY what you are doing (in your head) when you pretend that corecursive definitions terminate!!!

You are pretending that Peano's axioms executed in your head with near-infinite speed and produced ℕ!
wtf wrote: Sat Jul 30, 2022 4:44 am This I must say baffles me. I'm lost. I don't understand the question, and I don't see how this is an example of a hypercomputation. If you can explain what you are thinking with any clarity, I'd be curious. This seems very murky to me. What is "Is x = x a theorem?" supposed to mean, and what does hypercomputation have to do with it?
I am drawing a distinction between propositional and judgmental equality! Dependent type theory.

=(x,x) is a predicate which depends on a Boolean.

To ask the question "Is it a theorem?" is to ask whether =(x,x) evaluates to Boolean:True.

Does it? Show me a proof.
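For what it's worth, here is what such a proof looks like in a dependently typed proof assistant - a minimal Lean 4 sketch of the propositional/judgmental distinction, my illustration rather than a formalisation of the above:

-- the proposition x = x is a theorem; its proof term is rfl, and rfl is
-- accepted precisely because both sides are judgmentally (definitionally) equal
theorem x_eq_x {α : Type} (x : α) : x = x := rfl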

wtf wrote: Sat Jul 30, 2022 4:44 am Too theoretical. Well yes, it lives in the realm of theoretical computer science. Are you objecting to CS professors plying their trade?
I am objecting to its incompleteness. ITTMs are also theoretical. But ITTMs are a better theory than TMs.
wtf wrote: Sat Jul 30, 2022 4:44 am You seem to be unhappy at the practice of computer science. I'm not sure why this is. I'm not the lord high defender of the realm. I'm just trying to explain what little I know about the subject. There's computability and there's complexity, and they're related but different.
And they are also moot in the ITTM model where we can compute functions of infinite time complexity, but near-zero space complexity.

They compress really well! Like contractible spaces.
wtf wrote: Sat Jul 30, 2022 4:44 am Computability is too broad a brush, and complexity captures more of the fine points. And as computer science progresses, attention is being turned more toward complexity. I don't think anyone would disagree with you. The big theorems in computability are from the era of Turing, Gödel, Church, Post, et al. in the 1930s.
Yeah but it's not 1930 anymore! Corecursion happened in the 80s and 90s.
wtf wrote: Sat Jul 30, 2022 4:44 am I think this is just the phenomenon you noticed. Computability is an older notion. It was discovered and thought about first. It doesn't slice as fine as complexity. And when it comes to real-world impact, complexity is everything, because real-world resources are limited. A polynomial time algorithm to factor integers is a really big deal, much better than exponential. That's why quantum computers can in theory break contemporary public-key cryptography. So complexity theory is more practical than computability theory.
That's just one of the many points I am making. In more particular terms, two things are said to be equal in Mathematics if all of their extensional properties are equal (Leibniz's law). But if the runtime bounds of two proofs are different ... those are different extensional properties.

So the classical notion of equality seems to be blind to this nuance. And if you want to keep pretending that you only care about referentially transparent (total) functions then what of non-terminating coinductive functions?
wtf wrote: Sat Jul 30, 2022 4:44 am Just as zoology misses out on the fine points of quark confinement. Computability and complexity operate at different levels of discourse. They're concerned with different things.
They are concerned with different aspects of the same thing. Computation.

wtf wrote: Sat Jul 30, 2022 4:44 am Ok. Who is disagreeing with you? Not me, surely. Computability and complexity. Two overlapping but separate ways of looking at computations.
And the distinction lapses with an ITTM...
Last edited by Skepdick on Sat Jul 30, 2022 11:08 pm, edited 3 times in total.
Skepdick
Posts: 14520
Joined: Fri Jun 14, 2019 11:16 am

Re: Pure functions? What the heck are those ?!?!

Post by Skepdick »

wtf wrote: Sat Jul 30, 2022 4:44 am ...
I've spent far too much time repeating ITTM...
Another model of computation that exceeds the power of Turing machines is Hamkins' infinite-time Turing machines [15]. We give here just a brief overview and recommend the cited reference for background reading.
An infinite-time Turing machine, or just machine, is like a Turing machine which is allowed to run infinitely long, with the computation steps counted by ordinal numbers. The machine has a finite program, an input tape, work tapes, an output tape, etc. We assume that the tape cells contain 0’s and 1’s. At successor ordinals the machine acts like an ordinary Turing machine. At limit ordinals it enters a special “limit” state, its heads are placed at the beginnings of the tapes, and the content of each tape cell is computed as the lim sup of the values written in the cell
[15]: Hamkins et al. (2000), “Infinite time Turing machines”

More precisely, if c_α denotes the value of the cell c at step α, then for a limit ordinal β we have
c_β = 0 if ∃α < β. ∀γ. (α ≤ γ < β ⇒ c_γ = 0),
c_β = 1 otherwise.

The machine terminates by entering a special halt state, or it may run forever. It turns out that a machine which has not terminated by step ω₁ runs forever.

We can think of machines as computing partial functions 2^ℕ ⇀ 2^ℕ where 2 = {0, 1}: initialize the input tape with an infinite binary sequence x ∈ 2^ℕ, run the machine, and observe the contents of the output tape if and when the machine terminates. We can also consider infinite time computation of partial functions ℕ ⇀ ℕ: initialize the input tape with the input number, run the machine, and interpret the contents of the output tape as a natural number, where we ignore anything that is beyond the position of the output head. By performing the usual encoding tricks, we can feed the machines more complicated inputs and outputs, such as pairs, finite lists, and even infinite lists of numbers. We say that a function is infinite-time computable if there is an infinite-time Turing machine that computes it.

The power of infinite-time Turing machines is vast and extends far beyond the halting problem for ordinary Turing machines, although of course they cannot solve their own halting problem. For example, for every Π¹₁ subset S ⊆ 2^ℕ there is a machine which, given x ∈ 2^ℕ on its input tape, terminates and decides whether x ∈ S.
P.S. This is what Olcott is babbling about. But he doesn't have the background or language to express it.

Nor do I for that matter. Which is why I am quoting other people's work.

But it sure seems like a bloody long-winded, erudite, and unnecessarily complicated way of saying "ITTMs can do type-checking of higher-order types"