Emergence of self

Is the mind the same as the body? What is consciousness? Can machines have it?


Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Thu Feb 05, 2015 11:22 pm

Wyman wrote:
nonenone wrote:@Ginkgo

"If you are saying there are two separate entities with ALL their properties in common then this might be a problem."

The hypothesis has some tricky aspects. One of them is this:

Note that "thinkers" are not objects taken directly from the outer world; they are "representations of them in the network."
For example, we see a boy and the Network also sees him, but not necessarily exactly as we see him. So "thinkers" are "representations".

When self emerges, "Network" and "representation of Network" (which is observed by the Network itself) become united.
Look up 'Leibniz's Law'.
I guess in the end this is the core question, and I think it is a problem that will always plague AI exponents. In other words, the problem of internal functionalism. As discussed with nonenone, we start with the idea that an object being observed emits light rays that are absorbed by our eyes. These rays are transformed into electrochemical signals that are processed by the brain. We end up with a "representation" of that object. For some AI exponents it seems reasonable that the same type of internal functional process will eventually create conscious machines.

I think the problem with this analogy is that the stored memory of a computer is not the same as the working memory of the human mind. As with the human mind, we could say that processing data is not the same as representing the external world. In other words, additional information received during the processing period doesn't necessarily relate back to the initial input. Instead it is just the creation of additional, or auxiliary, data.

This is where working memory differs from stored memory. Working memory is not the recreation of additional data. Prinz claims that we get consciousness when the properties of a stimulus are represented from a particular point of view; that is to say, made available to working memory.

If anyone is interested they can google "Attended Intermediate-Level Representation Theory". It seems to me that one important difference between possible computer consciousness and human consciousness is that computers can't form intermediate-level representations, so computers can never have a sense of "self".

If someone could read Prinz and give me their take on it I would be most interested.

nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Fri Feb 06, 2015 12:19 am

@Ginkgo
As I said before, unfortunately I am not professionally involved in philosophy, and I am not a native English speaker either. Add to this the fact that I am very busy. So I have actually thought about your comments but still haven't had enough time to answer them in detail.

But I guess I have to start from somewhere. So let me start with this:

The brain has neural activity, which is the cause of both our conscious and our unconscious processes.
Now what is it, when we say that we are "conscious"?
Assume a network that "reports" that "it is conscious". How can you deduce that it is not actually conscious, that it doesn't understand what it is saying, but is just doing some mechanical process that leads to the output "I am conscious"? (Something like the Chinese room.)
As far as I can tell, the difference between "mechanically processing" and "understanding" is this: when we say something is "mechanically processing" data, we do not consider that "thing" to have a "self". But when we say something "understands" some data, we unintentionally and automatically assign a "self" to that "thing". This is something of an empirical observation of mine.
So to start, I assume that when a third person thinks that one object is mechanically processing data but another understands it, it is because he has supposed that the first one doesn't have a "self" but the second one does.
This leads to the conclusion that to build a machine which we can see as conscious, it must be such that "we deduce" that it has a "self".
And so the problem starts: what is "self"? My hypothesis was trying to answer that.

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Fri Feb 06, 2015 12:29 am

double post
Last edited by Ginkgo on Fri Feb 06, 2015 12:36 am, edited 1 time in total.

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Fri Feb 06, 2015 12:35 am

nonenone wrote:@Ginkgo
As I said before, unfortunately I am not professionally involved in philosophy, and I am not a native English speaker either. Add to this the fact that I am very busy. So I have actually thought about your comments but still haven't had enough time to answer them in detail.

But I guess I have to start from somewhere. So let me start with this:

The brain has neural activity, which is the cause of both our conscious and our unconscious processes.
Now what is it, when we say that we are "conscious"?
Assume a network that "reports" that "it is conscious". How can you deduce that it is not actually conscious, that it doesn't understand what it is saying, but is just doing some mechanical process that leads to the output "I am conscious"? (Something like the Chinese room.)
As far as I can tell, the difference between "mechanically processing" and "understanding" is this: when we say something is "mechanically processing" data, we do not consider that "thing" to have a "self". But when we say something "understands" some data, we unintentionally and automatically assign a "self" to that "thing". This is something of an empirical observation of mine.
So to start, I assume that when a third person thinks that one object is mechanically processing data but another understands it, it is because he has supposed that the first one doesn't have a "self" but the second one does.
This leads to the conclusion that to build a machine which we can see as conscious, it must be such that "we deduce" that it has a "self".
And so the problem starts: what is "self"? My hypothesis was trying to answer that.
I think this is pretty much correct. David Chalmers devised a thought experiment called "The Philosophical Zombie Argument" to express your basic idea. A philosophical zombie is an imaginary person who is like us in every way except that they lack conscious experience. So when you ask a philosophical zombie if he is conscious, he will always answer "yes". As observers we can only ever assume that a philosophical zombie has conscious experience. The reality is that "all is dark inside".

Wyman
Posts: 968
Joined: Sat Jan 04, 2014 2:21 pm

Re: Emergence of self

Post by Wyman » Fri Feb 06, 2015 1:27 am

Ginkgo wrote:
nonenone wrote:@Ginkgo
As I said before, unfortunately I am not professionally involved in philosophy, and I am not a native English speaker either. Add to this the fact that I am very busy. So I have actually thought about your comments but still haven't had enough time to answer them in detail.

But I guess I have to start from somewhere. So let me start with this:

The brain has neural activity, which is the cause of both our conscious and our unconscious processes.
Now what is it, when we say that we are "conscious"?
Assume a network that "reports" that "it is conscious". How can you deduce that it is not actually conscious, that it doesn't understand what it is saying, but is just doing some mechanical process that leads to the output "I am conscious"? (Something like the Chinese room.)
As far as I can tell, the difference between "mechanically processing" and "understanding" is this: when we say something is "mechanically processing" data, we do not consider that "thing" to have a "self". But when we say something "understands" some data, we unintentionally and automatically assign a "self" to that "thing". This is something of an empirical observation of mine.
So to start, I assume that when a third person thinks that one object is mechanically processing data but another understands it, it is because he has supposed that the first one doesn't have a "self" but the second one does.
This leads to the conclusion that to build a machine which we can see as conscious, it must be such that "we deduce" that it has a "self".
And so the problem starts: what is "self"? My hypothesis was trying to answer that.
I think this is pretty much correct. David Chalmers devised a thought experiment called "The Philosophical Zombie Argument" to express your basic idea. A philosophical zombie is an imaginary person who is like us in every way except that they lack conscious experience. So when you ask a philosophical zombie if he is conscious, he will always answer "yes". As observers we can only ever assume that a philosophical zombie has conscious experience. The reality is that "all is dark inside".
This is all terribly circular. If I said 'How do I tell if the zombie has a heart?' I am assuming that you know what a heart is and that they exist. You can't ask 'How can I tell if something is conscious' unless you tell me what consciousness is. And to say that it is 'self' surely begs the question.

I thought that the brain projected an image on the retina. Besides that image (and other analogous sensations), what is consciousness?

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Fri Feb 06, 2015 2:34 am

double post
Last edited by Ginkgo on Fri Feb 06, 2015 2:36 am, edited 1 time in total.

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Fri Feb 06, 2015 2:34 am

Wyman wrote:
This is all terribly circular. If I said 'How do I tell if the zombie has a heart?' I am assuming that you know what a heart is and that they exist. You can't ask 'How can I tell if something is conscious' unless you tell me what consciousness is. And to say that it is 'self' surely begs the question.
Good point. I guess it depends on who you ask.

Chalmers would say consciousness consists of both the "easy problem" and the "hard problem". From the point of view of the physicalist there is only the "easy problem".

Chalmers wants to know how all of this brain processing gives rise to the subjective experience that accounts for what it feels like to view the world from a first-person perspective. Objective accounts by the physicalist, no matter how accurate and precise, always seem to be lacking something. That something is conscious experience.

The problem I see with a third-person explanation is the assumption that all of this processing is enough to give rise to experience. Like Chalmers, I don't see how it does. In order for experience to be a genuine phenomenon we need a first person.
Wyman wrote:
I thought that the brain reflected an image on the retina. Besides that image (and other analogous sensations), what is consciousness?
Possibly an internal representation of an image. We know it cannot actually be that image per se. The idea that all of this sensory information coming into the retina is transformed into electrochemical spike trains that eventually end up in a place (a neural core of consciousness) where it can be observed by the self is incorrect.

Edited in order to make my point clear. I haven't changed the overall meaning.
Last edited by Ginkgo on Fri Feb 06, 2015 9:20 am, edited 2 times in total.

nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Fri Feb 06, 2015 6:32 am

@Wyman and Ginkgo

"This is all terribly circular. If I said 'How do I tell if the zombie has a heart?' I am assuming that you know what a heart is and that they exist. You can't ask 'How can I tell if something is conscious' unless you tell me what consciousness is. And to say that it is 'self' surely begs the question."

This is a good question. But actually I don't define "self" by "consciousness". As you may see in my hypothesis, I try to define "self" just by evolution. Let me put it straightforwardly: I think that for the Network to survive there must emerge an "imaginary object" in the network (I mean an object whose information of existence is received only by the Network and no one else), which I call "self". All the "consciousness things" start after that. I am trying to find the scheme by which evolution makes this imaginary object emerge. I very roughly guess that if a network can categorize, can work with P=>Q logic, and has some basic premises, it can be shown that self emerges in the species that survive.

So I guess that if a Network is able to categorize, is able to work with P=>Q logic, reports that it has a self, and can report how its self has emerged using some premises, P=>Q logic, and categorization, then we need no further items to conclude that it is really conscious. Because I guess that is all we can do too.

Wyman
Posts: 968
Joined: Sat Jan 04, 2014 2:21 pm

Re: Emergence of self

Post by Wyman » Fri Feb 06, 2015 3:20 pm

Possibly an internal representation of an image. We know it cannot actually be that image per se. The idea that all of this sensory information coming into the retina is transformed into electrochemical spike trains that eventually end up in a place (a neural core of consciousness) where it can be observed by the self is incorrect.
My description was backwards - the brain 'takes' an image from the retina and works on it. Is this model correct (for discussion purposes, not technically correct science, but consistent with science): The visual field - what we see here and now - is a picture created/manufactured or taken and transformed by the brain/eye and then, if we choose, observed by a different part of the brain.

In other words, can perception be divided into two parts (and be consistent with current science):

1) subconsciously processed picture - i.e. what we see in the present as the retinal image is worked on by various parts of the brain
2) analysis of that picture by the 'conscious' part of the brain

Wyman
Posts: 968
Joined: Sat Jan 04, 2014 2:21 pm

Re: Emergence of self

Post by Wyman » Fri Feb 06, 2015 3:51 pm

nonenone wrote:@Wyman and Ginkgo

"This is all terribly circular. If I said 'How do I tell if the zombie has a heart?' I am assuming that you know what a heart is and that they exist. You can't ask 'How can I tell if something is conscious' unless you tell me what consciousness is. And to say that it is 'self' surely begs the question."

This is a good question. But actually I don't define "self" by "consciousness". As you may see in my hypothesis, I try to define "self" just by evolution. Let me put it straightforwardly: I think that for the Network to survive there must emerge an "imaginary object" in the network (I mean an object whose information of existence is received only by the Network and no one else), which I call "self". All the "consciousness things" start after that. I am trying to find the scheme by which evolution makes this imaginary object emerge. I very roughly guess that if a network can categorize, can work with P=>Q logic, and has some basic premises, it can be shown that self emerges in the species that survive.

So I guess that if a Network is able to categorize, is able to work with P=>Q logic, reports that it has a self, and can report how its self has emerged using some premises, P=>Q logic, and categorization, then we need no further items to conclude that it is really conscious. Because I guess that is all we can do too.
I'm trying to understand this. The network is able to 'categorize' - and a category is an imaginary object - like 'redness', which comes only from the process of categorization (i.e. no other networks can 'see' the concept/category/idea). Redness is a separate thing from a red thing, so redness must 'be initiated' by something 'x'. Redness is then categorized as a 'color', and then all these categories are categorized as 'thoughts'. Then all the 'x's' that are associated with each thought are grouped/categorized into a category of 'thinkers', or thought initiators. At this high level of abstraction, the network has the categories of 'thought' and 'thinker'.

Then the category of 'thinkers' must be initiated by something, and an infinite regress occurs unless the network also has some other program code which disallows self-reference of categories. The system will crash when it gets to the 'ultimate' source - THINKER - unless it posits itself as that thinker and stops there. Then a wonderful thing occurs, as it realizes that all those 'x's' were itself all along.

Am I anywhere close?

Wyman
Posts: 968
Joined: Sat Jan 04, 2014 2:21 pm

Re: Emergence of self

Post by Wyman » Fri Feb 06, 2015 4:13 pm

Ginkgo wrote:
Wyman wrote:
nonenone wrote:@Ginkgo

"If you are saying there are two separate entities with ALL their properties in common then this might be a problem."

The hypothesis has some tricky aspects. One of them is this:

Note that "thinkers" are not objects taken directly from the outer world; they are "representations of them in the network."
For example, we see a boy and the Network also sees him, but not necessarily exactly as we see him. So "thinkers" are "representations".

When self emerges, "Network" and "representation of Network" (which is observed by the Network itself) become united.
Look up 'Leibniz's Law'.
I guess in the end this is the core question, and I think it is a problem that will always plague AI exponents. In other words, the problem of internal functionalism. As discussed with nonenone, we start with the idea that an object being observed emits light rays that are absorbed by our eyes. These rays are transformed into electrochemical signals that are processed by the brain. We end up with a "representation" of that object. For some AI exponents it seems reasonable that the same type of internal functional process will eventually create conscious machines.

I think the problem with this analogy is that the stored memory of a computer is not the same as the working memory of the human mind. As with the human mind, we could say that processing data is not the same as representing the external world. In other words, additional information received during the processing period doesn't necessarily relate back to the initial input. Instead it is just the creation of additional, or auxiliary, data.

This is where working memory differs from stored memory. Working memory is not the recreation of additional data. Prinz claims that we get consciousness when the properties of a stimulus are represented from a particular point of view; that is to say, made available to working memory.

If anyone is interested they can google "Attended Intermediate-Level Representation Theory". It seems to me that one important difference between possible computer consciousness and human consciousness is that computers can't form intermediate-level representations, so computers can never have a sense of "self".

If someone could read Prinz and give me their take on it I would be most interested.
I'm going to read Prinz. I've been immersed in Rorty and Davidson lately. Neither says much about consciousness. My suspicion is that Chalmers may underestimate the 'ease' of the first set of problems and that theory of meaning and epistemology involve the 'hard problem' reformulated in different ways. I.e., Chalmers is presenting old philosophical problems wrapped up in new clothing. But this is just a conjecture, as I haven't read Chalmers (I did read some Searle on the topic, though, and found nothing new).

nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Fri Feb 06, 2015 4:58 pm

@wyman

Let me make it more clear:

A- How the network goes into the loop is as follows:

1- The Network can categorize "redness", "if a then b", "he is here", and everything else with this format, as a "thought".
2- For any thought, it searches for an object inside the category of "thinkers".

For instance, it categorizes "it is red" as "thought A". Then it searches for a thinker for "thought A" and assigns, for example, "thinker B" to it. So it reaches the conclusion that "thought A has been initiated by thinker B".

3- Now it automatically categorizes "thought A has been initiated by thinker B" as a new "thought", let's say "thought C". Then step 2 applies again, then step 3, and so on, again and again.

B- How the Network can bypass the loop:

I am still trying to figure this out in detail, but for now what I can say is this:

The loop described in section A results from the fact that, having no "self", the Network works as if it were a third-person observer in relation to "thoughts" and "thinkers". That is wrong.
The Network has initiated both the "thoughts" and the (representations of) "thinkers", because thoughts are internal codes of the Network and thinkers are also codes (representations) inside it. But at this level the Network doesn't have a "self", so it cannot logically assign them to "itself". Therefore they are processed as if the Network were a third-person observer in relation to them. This creates the loop.

The problem is solved when (due to some mutation-like process) an "imaginary object" emerges (I guess you correctly said "wonderful"; I have the same sense, but it could actually happen gradually too) with properties that let the Network bypass the loop. I consider this imaginary object to be the "self".

For example, it could be introduced by a new premise inside the network. A very, very primitive example would be: "there is an object which has initiated every thought, before and after". This premise can supersede the first premise (for every thought there must be a thinker), and so the loop is avoided.

But the second premise still has some flaws. As I said, I am trying to figure out the details. But for me the overall picture is as I tried to explain.
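Purely as a toy illustration (my own sketch, not anything from the hypothesis itself), the loop in section A and the premise swap in section B can be caricatured in a few lines of Python; the function name and the step limit are arbitrary choices of mine:

```python
# Toy sketch of the thought -> thinker -> new-thought regress described above.
# Premise 1: every thought must be initiated by a thinker.
# Premise 2 (the "self" premise): a single object initiated every thought.

def assign_thinkers(first_thought, max_steps=10, has_self_premise=False):
    """Follow the attribution chain, either forever (capped) or until
    the 'self' premise short-circuits it."""
    thoughts = [first_thought]
    for step in range(max_steps):
        current = thoughts[-1]
        if has_self_premise:
            # Premise 2 supersedes premise 1: attribute every thought,
            # past and future, to one posited object ("self") and stop.
            return thoughts, "all thoughts initiated by SELF"
        # Premise 1: posit a fresh thinker for the current thought...
        thinker = f"thinker_{step}"
        # ...but the attribution itself is a new thought (step 3 above),
        # so the search never terminates on its own.
        thoughts.append(f'"{current}" was initiated by {thinker}')
    return thoughts, "loop never terminated"

looped, verdict_a = assign_thinkers("it is red")
halted, verdict_b = assign_thinkers("it is red", has_self_premise=True)
print(len(looped), verdict_a)  # the regress runs until the artificial cap
print(len(halted), verdict_b)  # the 'self' premise stops it immediately
```

The step cap is only there so the sketch halts; without it, premise 1 alone recurses without bound, which is the crash Wyman described.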

Arising_uk
Posts: 11939
Joined: Wed Oct 17, 2007 2:31 am

Re: Emergence of self

Post by Arising_uk » Fri Feb 06, 2015 10:16 pm

roydop wrote:Ah, I see. Well that's a tough one because matter emerges from self, not the other way around. ...
Show me a self that comes before the matter?

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Sat Feb 07, 2015 7:53 am

nonenone wrote:@Wyman and Ginkgo

"This is all terribly circular. If I said 'How do I tell if the zombie has a heart?' I am assuming that you know what a heart is and that they exist. You can't ask 'How can I tell if something is conscious' unless you tell me what consciousness is. And to say that it is 'self' surely begs the question."

This is a good question. But actually I don't define "self" by "consciousness". As you may see in my hypothesis, I try to define "self" just by evolution. Let me put it straightforwardly: I think that for the Network to survive there must emerge an "imaginary object" in the network (I mean an object whose information of existence is received only by the Network and no one else), which I call "self". All the "consciousness things" start after that. I am trying to find the scheme by which evolution makes this imaginary object emerge. I very roughly guess that if a network can categorize, can work with P=>Q logic, and has some basic premises, it can be shown that self emerges in the species that survive.

So I guess that if a Network is able to categorize, is able to work with P=>Q logic, reports that it has a self, and can report how its self has emerged using some premises, P=>Q logic, and categorization, then we need no further items to conclude that it is really conscious. Because I guess that is all we can do too.
How about saying that in order to by-pass the algorithmic loop one needs a non-algorithmic computation to evolve? This is where some of the thinking is at the moment.

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Sat Feb 07, 2015 8:11 am

Wyman wrote:
Possibly an internal representation of an image. We know it cannot actually be that image per se. The idea that all of this sensory information coming into the retina is transformed into electrochemical spike trains that eventually end up in a place (a neural core of consciousness) where it can be observed by the self is incorrect.
My description was backwards - the brain 'takes' an image from the retina and works on it. Is this model correct (for discussion purposes, not technically correct science, but consistent with science): The visual field - what we see here and now - is a picture created/manufactured or taken and transformed by the brain/eye and then, if we choose, observed by a different part of the brain.

In other words, can perception be divided into two parts (and be consistent with current science):

1) subconsciously processed picture - i.e. what we see in the present as the retinal image is worked on by various parts of the brain
2) analysis of that picture by the 'conscious' part of the brain

"If we choose, observed by a different part of the brain." It is the idea of "observed" that is the problem. There is nothing we can identify that does the observing. The science tells us that consciousness is disunified.

https://en.wikipedia.org/wiki/Cartesian_theater
