I guess in the end this is the core question, and I think it is a problem that will always plague AI exponents: the problem of internal functionalism. As discussed with nonenone, we start with the idea that an object being observed emits light rays that are absorbed by our eyes. These rays are transduced into electrochemical signals that are processed by the brain, and we end up with a "representation" of that object. For some AI exponents it seems reasonable that the same type of internal functional process will eventually create conscious machines.

Wyman wrote:Look up 'Leibniz's Law'

nonenone wrote:@Ginkgo
"If you are saying there are two separate entities with ALL their properties in common then this might be a problem."
The hypothesis has some tricky aspects. One of them is this:
Note that "thinkers" are not direct objects from the outer world; they are representations of those objects in the network.
For example, we see a boy, and the Network also sees him, but not necessarily exactly as we see him. So "thinkers" are representations.
When a self emerges, the "Network" and the "representation of the Network" (which is observed by the Network itself) become united.
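To make that picture concrete, here is a toy Python sketch (my own illustration, not nonenone's actual model): a Network holds representations of outer objects rather than the objects themselves, and a "self" in this minimal sense appears when those representations come to include a representation of the Network itself.

```python
class Representation:
    """A stand-in for the network's internal encoding of some object."""
    def __init__(self, of):
        self.of = of  # the thing being represented, never the thing itself


class Network:
    def __init__(self):
        self.representations = []

    def observe(self, obj):
        # The network never contains the boy himself,
        # only its own encoding of him.
        rep = Representation(obj)
        self.representations.append(rep)
        return rep

    def has_self_model(self):
        # A "self" in this toy sense: at least one representation
        # is a representation of the network itself.
        return any(rep.of is self for rep in self.representations)


net = Network()
net.observe("a boy")        # an outer object
print(net.has_self_model()) # no self-model yet
net.observe(net)            # the network observes itself
print(net.has_self_model()) # network and representation-of-network now united
```

Obviously nothing here is conscious; the sketch only shows the structural move being described, in which the observer appears among its own representations.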
I think the problem with this analogy is that the stored memory of a computer is not the same as the working memory of the human mind. As with the human mind, we could say that processing data is not the same as representing the external world. In other words, additional information generated during the processing period doesn't necessarily relate back to the initial input; it is just the creation of additional, or auxiliary, data.

This is where working memory differs from stored memory. Working memory is not the creation of additional data. Prinz claims that we get consciousness when properties of a stimulus, represented from a particular point of view, are made available to working memory.
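A loose Python sketch of the stored-vs-working contrast as I read it (my own gloss, not Prinz's actual model): stored memory archives everything ever processed, while working memory holds only the few items currently attended, and only those are "available" in the relevant sense.

```python
from collections import deque

# Toy contrast: stored memory keeps everything, like data on a disk;
# working memory has a small capacity, so only recently attended
# items remain available.
stored_memory = []
working_memory = deque(maxlen=3)  # small capacity, like human working memory

def process(stimulus):
    stored_memory.append(stimulus)   # archived permanently
    working_memory.append(stimulus)  # available now; older items drop out

for s in ["red patch", "dog bark", "coffee smell", "face", "melody"]:
    process(s)

print(stored_memory)         # all five stimuli are archived
print(list(working_memory))  # only the last three remain available
```

On this toy picture, a computer has plenty of the first kind of memory; the open question in the thread is whether anything in it ever plays the role of the second.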
If anyone is interested they can google "Attended Intermediate-Level Representation Theory". It seems to me that one important difference between possible computer consciousness and human consciousness is that computers can't attend to intermediate-level representations, so computers can never have a sense of "self".
If someone could read Prinz and give me their take on it I would be most interested.