wtf wrote: ↑Fri Feb 14, 2020 11:26 pm
> Let me emphasize something I forgot to mention the other day about lazy evaluation. Lazy evaluation is an optimization technique. It's a clever way to reduce the use of resources in a given computation: we don't evaluate an expression till we need to. There's a complementary technique called memoization, which is that once we evaluate an expression we save the result so that we don't have to calculate it again. Computer science is full of these kinds of optimizations for the purpose of using real-world resources (time, space, energy) more efficiently.

I understand all of that. Hence why I keep referencing the Wikipedia page on space-time trade-offs.
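
The trade-off being described can be sketched in a few lines of Python. This is a minimal illustration, not anything from the thread: `fib` and the call counter are my own hypothetical names. Memoization spends space (the cache) to buy time (fewer evaluations).

```python
# A sketch of the space-time trade-off via memoization.
# `fib` and `calls` are illustrative names, not from the discussion.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)  # memoize: spend space on a cache...
def fib(n):
    global calls
    calls += 1  # ...to save time by never re-evaluating an input
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
print(calls)  # 31 calls with the cache; roughly 2.7 million without it
```

With the cache, each of the 31 distinct inputs (0 through 30) is evaluated exactly once; strip the decorator and the same call tree re-evaluates the same expressions millions of times. Same answer, different resource profile.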

Just like you can choose to optimise a "mathematical universe" for consistency OR completeness, but not both.

Likewise, you can choose to optimise a computation for space OR time, but not both!

That is how I interpret dichotomies. They present us with forks in the road: choices. And in so far as I am choosing X over Y, I say that I am optimising for X.

What I am further claiming is that Mathematics has chosen to optimise for constant time complexity at the cost of infinite space complexity. Perhaps implicitly rather than explicitly, but you are doing it. You insist that everything is timeless.

This is 100% compatible with the language of complexity theory. All operations/functions/calculations in the Mathematical universe are constant, i.e. their time complexity is O(1). Constant time. Which is zero time. But you can't choose zero time AND zero space! If Maths is timeless then it's spaceful.

If Maths is consistent then it's incomplete etc. etc.

It's as trivial as imagining that all Mathematical functions have been pre-computed and memoised somewhere - that's what a topological space is (in my view anyway).

And so in getting the answer for f(x) = x^2 you just have to "look it up" in a table (space):

```
x | x^2
-------
1 | 1
2 | 4
3 | 9
...
```

In so far as I am treating a "topological space" as a "lookup table with O(1) time complexity", it's a perfect hash function.
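
The "pre-computed and memoised somewhere" picture can be sketched with an ordinary Python dict, which is a hash table with O(1) average-case lookup. The finite range is my own assumption purely so the sketch runs; the point of the analogy is what happens when the table is taken to be infinite.

```python
# A sketch of the "everything pre-computed" view, assuming a finite
# prefix of the table so it fits in real memory. A dict is a hash
# table: space is paid up front, lookups cost O(1) on average.
table = {x: x ** 2 for x in range(1, 1001)}

print(table[3])     # 9, retrieved without recomputing x^2
print(table[1000])  # 1000000
```

Nothing is computed at lookup time; the time cost was converted into space cost when the table was built.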

Obviously - you have infinite memory. Infinite tape. By the very definition of a Turing Machine.

If ALL functions/operations/calculations took zero seconds to compute, then the laziness/strictness distinction becomes meaningless.

The Oracle Machine knows everything right now.

It's only a meaningful distinction if you recognise time, i.e. if you recognise that Turing machines perform instructions/operations.

Moving the head 1000 times is not the same thing as moving the head once.

wtf wrote: ↑Fri Feb 14, 2020 11:26 pm
> What you are talking about is complexity: how efficiently a computation runs as a function of the input length. As the inputs get longer, does the resource consumption go up as a polynomial or an exponential of the input length? That's a huge deal in computer science these days.

Yes, but that's only one side of the coin. The other side is the memoization aspect. When I want the N-th digit of pi the input length is **always** 1.

The table then looks like:

```
x | pi-digit
------------
1 | 3
2 | 1
3 | 4
4 | 1
5 | 5
...
```

The question becomes about the efficiency of your Turing Machine in accessing ALL of its memory/tape.

You are assuming that reading the first cell and reading the N-th cell takes "exactly the same number of operations".

The practical distinction here is between sequential vs random access memory. O(N) read time, or O(1) read time? Linked list or hash map?

If you are telling me that your Turing Machine can give you the answer to the N-th digit of pi in constant (zero) time, then you are necessarily telling me that your Turing Machine has random access memory.

But the Turing Machine doesn't have random access memory. It has sequential access memory. Move Head Left/Move Head Right. It's a linked list.
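The contrast above can be sketched by counting head moves. This is a toy model of my own, assuming the tape is a Python list the head must walk cell by cell, versus a dict standing in for random access memory.

```python
# A toy contrast: sequential access (Turing tape / linked list)
# vs random access (hash map). Step counts are illustrative only.
tape = [3, 1, 4, 1, 5, 9, 2, 6]  # first digits of pi as cells

def sequential_read(tape, n):
    """Walk the head from cell 0 to cell n, counting Move Head Right."""
    moves = 0
    for _ in range(n):
        moves += 1  # one head move per cell traversed: O(N)
    return tape[n], moves

value, moves = sequential_read(tape, 7)
print(value, moves)  # reading the 8th cell costs 7 head moves

# A dict models random access: one probe regardless of position, O(1).
ram = {i: v for i, v in enumerate(tape)}
print(ram[7])  # same value, no walk
```

Reading cell N costs N moves on the tape but a single probe in the hash map: that is exactly the linked-list-versus-hash-map distinction.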

So which model of computation are you using then?

This is where I think we keep missing each other. If your Turing Machine had finite tape (e.g. you didn't pre-suppose infinite memory resources, infinite space), there would be direct implications for "computability".

I don't need infinite space OR infinite time to **represent** "ALL positive integers": here is a Python algorithm that would **generate** it for me.


```
ℕ = lambda x=0: ℕ(x+1)
```

Now, if I run this algorithm on a real computer, I soon get a "recursion depth exceeded" error, obviously, because memory is finite.

But if I run this algorithm on a Turing Machine with infinite tape, then it produces EXACTLY the natural numbers.

So whenever you say that you are doing transformations on the Natural numbers, I am hearing "you have a function which takes N as input".

But ℕ is a function that doesn't halt?
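
One way to make that non-halting function concrete, tying back to the lazy-evaluation point at the top: the same ℕ can be expressed as a Python generator. This is my own sketch, not a claim about what ℕ "really" is. A generator never halts either, but laziness means you only pay for the prefix you actually demand.

```python
# A lazy rendering of ℕ: an unbounded process, consumed on demand.
# No recursion, so no "recursion depth exceeded"; it simply never ends.
from itertools import islice

def naturals(x=0):
    while True:
        yield x       # produce the current natural number...
        x += 1        # ...then suspend until the next one is demanded

print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
```

The whole of ℕ is never materialised in space; it exists only as a rule plus however many steps in time you choose to run.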

I guess my real question is one of "higher-order types": do you see ℕ as a space or as a function?