Skepdick wrote: ↑
That's not true. It depends on what you are inter-connected to.
If the inter-connected things are able to decide Turing-undecidable problems then that is obviously a more powerful system.
Perfectly true. This is basic computability theory. You start with a basic TM and keep adding oracles. You get a well-ordered collection of increasingly powerful systems of computation. Turing worked all of this out in the 1930s and 1940s.
Skepdick wrote: ↑
Perhaps we are disagreeing about the definition of "power"? Whatever "power" means - if I can decide on more things that you can then I am more powerful than you.
Perfectly well agreed. This is [url=https://en.wikipedia.org/wiki/Computability_theory]computability theory[/url]: the study of what functions are computable by algorithms.
Now [url=https://en.wikipedia.org/wiki/Computational_complexity_theory]complexity theory[/url] is the study of the runtime and/or space efficiency, as a function of the size of the input, of a given computable function.
So (as an example) f(x) = x^2 and f(x) = e^x are both computable functions; but x^2 has polynomial growth and e^x has exponential growth, so in complexity theory an algorithm whose running time grows like x^2 is considered faster, or better, than one whose running time grows like e^x.
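Just to make that growth-rate contrast concrete, here is a minimal Python sketch of my own (nothing from the thread, purely an illustration) that prints how n^2 and e^n scale as n grows:
[code]
# Illustration only: compare polynomial vs. exponential growth
# of hypothetical step counts as the input size n increases.
import math

for n in (10, 20, 30, 40, 50):
    poly = n ** 2          # polynomial growth
    expo = math.exp(n)     # exponential growth
    print(f"n={n:>2}  n^2={poly:>5}  e^n={expo:.3e}")
[/code]
By n = 50 the exponential column is already around 5 x 10^21, which is the whole point: the class of growth is what dominates, not the details of the function.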
Skepdick wrote: ↑
I am making the point that being connected to a human allows decisions to be made about Turing-undecidable problems.
If that were true, some human would have solved the Halting problem; and we can't do that. Or at least it's not known if a human can solve the Halting problem. If a human can do that, then humans can do something that machines can't, therefore our minds are not Turing machines. This is a rather important philosophical question, which is currently open.
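For anyone following along, here is the standard diagonal argument sketched in Python. The halts() function is purely hypothetical; that's the whole point, since no total, correct decider of that kind can exist:
[code]
# Sketch of Turing's diagonal argument. halts() is a hypothetical decider
# that would return True iff program(data) eventually halts.

def halts(program, data):
    """Hypothetical halting decider -- cannot actually be implemented."""
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever halts() predicts for program run on itself.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Now ask: does diagonal(diagonal) halt?
# If halts(diagonal, diagonal) returns True, diagonal(diagonal) loops forever.
# If it returns False, diagonal(diagonal) halts.
# Either way halts() gave the wrong answer, so no such decider exists.
[/code]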
Skepdick wrote: ↑
All of those undecidable semantic properties that Rice's theorem talks about. A TM can't decide on them. A human can.
I looked up Rice's theorem and did not see the relevance. It's unknown whether a human can solve the general Halting problem. Believe me, if it were known, one way or the other, we'd all have heard about it.
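For what it's worth, the usual presentation of Rice's theorem does route back through halting. A rough Python-flavoured sketch of the reduction, with all names hypothetical, looks like this:
[code]
# Sketch of the reduction behind Rice's theorem. Suppose we had a decider
# for some non-trivial semantic property P of programs -- say,
# "this program computes the constant-42 function". Then we could decide
# halting, which is impossible. All names here are hypothetical.

def decides_P(program):
    """Hypothetical decider for the semantic property P."""
    raise NotImplementedError

def would_halt(program, data):
    def hybrid(x):
        program(data)   # if program(data) never halts, we never get past here
        return 42       # otherwise hybrid behaves like the constant-42 function
    # hybrid has property P exactly when program(data) halts,
    # so deciding P would decide halting.
    return decides_P(hybrid)
[/code]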
Skepdick wrote: ↑
Is this matrix of pixels a picture of a cat?
Not following the intent of your question. Are the pixels that make up this post sentences in the English language?
Skepdick wrote: ↑
Well, yeah! Efficiency is one of those semantic properties that humans can decide on and Turing machines cannot!
Complexity theory is a different branch of CS. Given that a function is Turing-computable, we can ask how efficient it is as a function of the size of the input. It's unclear to me if computing the efficiency of a given algorithm is itself noncomputable. I suspect it's perfectly well computable.
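If it helps, the complexity-theory question can at least be poked at empirically. A rough Python sketch of my own, just for illustration, that times one computable function at a few input sizes:
[code]
# Rough empirical look at how one algorithm's running time scales with
# input size. This is the complexity question -- completely separate from
# the computability question of whether the function can be computed at all.
import timeit

def square_all(n):
    return [i * i for i in range(n)]

for n in (10_000, 20_000, 40_000, 80_000):
    t = timeit.timeit(lambda n=n: square_all(n), number=50)
    print(f"n={n:>6}  total time over 50 runs: {t:.4f}s")
[/code]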
Skepdick wrote: ↑
Of course it does. I am 100% certain that a program will terminate - if I send it the signal to terminate. It means I can determine whether a program will terminate; while a Turing machine cannot determine that.
Saying that you can always pull the power plug out of your computer is not regarded by computer scientists as a refutation of Turing's proof of the unsolvability of the general Halting problem.
Skepdick wrote: ↑
It may not me meaningful in denotational semantics, but it is meaningful to humans.
A lot of things are meaningful to humans that are not meaningful to computers.
Skepdick wrote: ↑
Then your metric for computational power is qualitatively blind. Runtime bounds matter.
My metric? I'm only explaining to you the distinction made in the field of computer science between computability and complexity.
But you are contradicting yourself. Earlier you said that the measure of computational power is that one system is more powerful than another if it can compute more things. Well, that's exactly what computability theory is about. It says nothing about the efficiency of a given computation.
The question of whether something can be computed is computability theory. The question of how efficiently it can be computed is complexity theory. I didn't make this up, I'm just explaining to you what these standard distinctions are. I supplied relevant Wikipedia links.
Skepdick wrote: ↑
A computrer which solves a problem in one operation is not the same thing as a computer which solves a problem in two operations.
Perfectly true. They both compute the same thing, so they have the same computational power. One is more efficient than the other, so it wins on efficiency.
I don't see why this should trouble you. Two things could have the same mass but a different size. There are different ways of measuring things.
Skepdick wrote: ↑
So you don't make a semantic/qualitative distinction between 2x programs which run for 5 seconds in series and 2x programs which run for 5 seconds in parallel ?!?
Well if they both run in 5 seconds, what do we care about the program internals? I don't follow the question.
Skepdick wrote: ↑
It's pretty obvious that a computational system which obtains a result in 1/nth the time is n times as powerful.
Ok. You have contradicted something you said earlier. I am going to bold the following so that you'll notice it.
I hope you will read what I'm about to explain, and thoughtfully process it in your own mind.
Consider a set of functions, say from the natural numbers to the natural numbers. One function is the squaring function, another might be some polynomial, and some other one might be entirely random.
A given "system of logic," as Turing put it, is a TM-like system along with some oracles. So the basic TM system is capable of computing a certain subset of those functions. Those are the computable functions. Call this system T0.
If we add an oracle for the Halting problem, we get a more powerful system, T1. T1 can compute all the functions T0 can, plus some more.
But by the same proof that shows the Halting problem for T0 is unsolvable by T0, the Halting problem for T1 is unsolvable by T1, and we can add yet a second oracle to get T2. And so forth. We get a sequence of ever more powerful systems, each one able to compute more things than the prior system.
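If a sketch helps, here is the same idea in schematic Python. Everything is hypothetical; none of these oracles can actually be implemented, which is exactly what drives the hierarchy:
[code]
# Schematic sketch of the oracle hierarchy T0, T1, T2, ...
# Given a (hypothetical) halting oracle for level n, the same diagonal
# construction defeats it, so level n+1 needs a brand-new oracle.

def make_diagonal(halts_n):
    """Build the program that does the opposite of what halts_n predicts."""
    def diagonal(program):
        if halts_n(program, program):
            while True:   # predicted to halt, so loop forever
                pass
        return            # predicted to loop, so halt immediately
    return diagonal

# T1 = T0 plus an oracle for T0's halting problem; but make_diagonal shows
# T1 still cannot decide its own halting problem, so we add another oracle
# to get T2, and so forth.
[/code]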
So earlier you wrote,
Skepdick wrote: ↑
If the inter-connected things are able to decide Turing-undecidable problems then that is obviously a more powerful system.
RIGHT! You are agreeing that one system is more powerful than another if it can compute everything the weaker system can compute, plus some other stuff.
Just able to compute. No mention of efficiency! And you are correct, this is the definition of power in computability theory.
And now you say:
Skepdick wrote: ↑
It's pretty obvious that a computational system which obtains a result in 1/nth the time is n times as powerful.
Yes in terms of complexity, or efficiency. But not computability. By your own agreement earlier, if the two systems can compute the same set of functions, they have the same computational power; EVEN if one is more efficient than the other.
[And by the way, in complexity theory, 7x is as good as x. Constant factors don't matter when it comes to complexity class. What matters is the order of growth, in the sense that exponentials grow much faster than polynomials. But no distinction is made between constant multiples of the same function: x, 7x, and a million times x have exactly the same asymptotic efficiency. Constant coefficients don't count. For what it's worth, this is peripheral to the main discussion.]
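A tiny numeric illustration of that bracketed point, again just my own sketch in Python:
[code]
# Constant factors never change the complexity class: n, 7n, and even
# 1,000,000*n are all linear, while 2**n eventually dwarfs every one of them.

for n in (10, 20, 30, 40, 50):
    print(f"n={n:>2}  7n={7 * n:>4}  1e6*n={10**6 * n:>10}  2^n={2**n:>16}")
[/code]
The 2^n column overtakes even the million-times-n column well before n = 50.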
So I hope you will get clear in your mind the distinction between:
* Computability: a system is more powerful than another by virtue of being able to compute more things. Efficiency doesn't matter.
* Complexity: a system is more powerful than another if it computes the same things, but more efficiently, in terms of classes like logarithmic, polynomial, exponential, etc.
So your 1/7 example only counts in complexity theory. [And not even then, for the reason I noted: constant multipliers don't make a difference.] If they can both compute the same things, they have the same power --
which you already agreed to earlier.
I hope this is clear.
Skepdick wrote: ↑
No. I am talking about hypercomputations. Stuff beyond the Church-Turing barrier.
Hypercomputation. Doing infinitely many operations in finite time. Isn't that completely hypothetical? Impossible to implement according to the currently understood laws of physics?
That's all well and good to speculate about hypercomputation, but then how on earth can you care about efficiency? If you have hypercomputation, you don't need efficiency at all! Everything gets done in arbitrarily small amounts of time.
So again, you are going off in contradictory directions. If you want to talk about program efficiency, that's a very real-world concern. Hypercomputation is a completely other-worldly concern, the realm of the purely abstract. And if we had hypercomputation, we would not need to care at all about efficiency! Who cares how inefficient an algorithm is if we can execute it with infinite speed!!
Skepdick wrote: ↑
The most trivial example being the question "Is x=x a theorem?"
This I must say baffles me. I'm lost. I don't understand the question, and I don't see how this is an example of a hypercomputation. If you can explain what you are thinking with any clarity, I'd be curious. This seems very murky to me. What is "Is x = x a theorem?" supposed to mean, and what does hypercomputation have to do with it?
Skepdick wrote: ↑
The point I am making is that the Church-Turing thesis is a terrible model of computation in 2022! It's too incomplete. Too theoretical.
Too theoretical. Well yes, it lives in the realm of theoretical computer science. Are you objecting to CS professors plying their trade?
You seem to be unhappy at the practice of computer science. I'm not sure why this is. I'm not the lord high defender of the realm. I'm just trying to explain what little I know about the subject. There's computability and there's complexity, and they're related but different.
For what it's worth, complexity theory is a more RECENT concern. In the 1930s people cared about computability; most contemporary research is in complexity. Perhaps this is what you're getting at. Computability is too broad a brush, and complexity captures more of the fine points. And as computer science progresses, attention is being turned more toward complexity. I don't think anyone would disagree with you. The big theorems in computability are from the era of Turing, Gödel, Church, Post, et al. in the 1930s.
I think this is just the phenomenon you noticed. Computability is an older notion. It was discovered and thought about first. It doesn't slice as fine as complexity. And when it comes to real-world impact, complexity is everything, because real-world resources are limited. A polynomial-time algorithm to factor integers is a really big deal, much better than exponential. That's why quantum computers could in theory break much of contemporary public-key cryptography. So complexity theory is more practical than computability theory.
Skepdick wrote: ↑
It misses out on a bunch of qualitative/empirical facts which are part and parcel of the human-computer interactions!
Just as zoology misses out on the fine points of quark confinement. Computability and complexity operate at different levels of discourse. They're concerned with different things.
Skepdick wrote: ↑
Latency/responsiveness being one of them.
Ok. Who is disagreeing with you? Not me, surely. Computability and complexity. Two overlapping but separate ways of looking at computations.