Zelebg wrote: ↑Tue Nov 05, 2019 12:16 am Execution of specific functions goes step by step. But which functions will run, with what parameters and when, can be triggered and varied relative not only to external events; since the process is recursive, changes of parameters and function branching may be triggered by the "thought" process itself.

This is moot. The process need not be recursive, and it doesn't matter whether that which triggers the branching is called 'thought' or 'randomness'.
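The point can be sketched in a few lines: a process that branches on its own intermediate results needs no recursion, and a process whose branching is triggered by a random draw has exactly the same structure. This is only an illustrative sketch; the function names and the coin-flip trigger are my own assumptions, not anything from the thread:

```python
import random

def step_with_thought(state):
    # Branching triggered by internal state ("thought"):
    # the next transition depends on a value the process itself computed.
    return state + 1 if state % 2 == 0 else state * 2

def step_with_randomness(state):
    # Branching triggered by a random draw: structurally identical,
    # only the source of the trigger differs.
    return state + 1 if random.random() < 0.5 else state * 2

def run(step, state, n):
    # A plain iterative loop: no recursion is needed for the process
    # to branch on its own intermediate results.
    for _ in range(n):
        state = step(state)
    return state
```

From the outside, both runs are just sequences of state transitions; the label we attach to the trigger changes nothing about the mechanics.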
Sufficient to make my point - indeed. You can't arrive at goals through analysis. You can only arrive at sub-goals via tactics and strategy towards pursuing your primary goal.
Human understanding of the world hits a wall at game theory semantics. The scientific goal is to 'build accurate predictive models of reality'. The philosophical goal is to discover wisdom and pursue Truth.
And it all falls apart when you ask "Why?"
This is just optimisation.
Zelebg wrote: ↑Tue Nov 05, 2019 12:16 am Computation in itself is not relevant. We are talking about "something" experiencing that computation, sensation, or emotion. We are talking about sentience, which means "subjective experience", and it is the "subjective" part of it, or 'qualia', or 'consciousness', that the title of this thread is asking about.

All you have done is substitute one language for another: input for senses, output for actions, hardware interrupts for emotions, sense-data processing for experience. You haven't really said anything. Whether you develop a model of consciousness in Russian or English is moot. Understanding is not about language.
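The vocabulary-substitution point can be made concrete: the two fragments below are the same program, differing only in identifier names. Both naming schemes are mine, purely for illustration:

```python
# "Mentalistic" vocabulary
def perceive(sense_data):
    # Aggregate incoming sense-data into an experience value.
    return sum(sense_data)

def act(experience):
    # Decide on an action based on the experience.
    return experience > 0

# "Computational" vocabulary: identical logic, renamed identifiers
def process_input(input_data):
    return sum(input_data)

def produce_output(processed):
    return processed > 0
```

Renaming the functions changes nothing about what the program does, which is the sense in which swapping vocabularies adds no explanatory content.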
Hardware and interface is about input/outputs. Broadly - that's still about computation.
Zelebg wrote: ↑Tue Nov 05, 2019 12:16 am It seems, as far as we know, this "hardware and interface" that could fit the bill here and explain this _subjective_ phenomenon has no parallel in any of our sciences, except science fiction. Seriously, some kind of "dream" of the type 'Total Recall' or 'The Matrix' is the only kind of mechanics we know of that could, at least in principle, address this problem.

I don't think you have a solid conception of what an 'explanation' or a 'problem' is. You are demonstrating a very inconsistent metaphysic when reasoning about these things.
The models of computation/computers we have invented are the best models we have of what the human mind does. Computers (both quantum and classical) are a reflection (an expression?) of self. They are obviously incomplete, and of course you could always argue that they are purely quantitative. But if quantitative models turn out to be sufficient for the invention of an AI that passes the Turing test and is indistinguishable from you and me (only time will tell whether that is the case), you are going to have to figure out how to wipe the egg off your face for ever having stood behind the notion of 'qualia'.
Daniel Dennett has a thought experiment called "Two Black Boxes" ( https://philpapers.org/rec/DENTBB ). Most of philosophy tends to disregard it, because it clearly demonstrates how "what humanity cares about" and "what philosophers care about" are terribly misaligned categories.