Emergence of self

Is the mind the same as the body? What is consciousness? Can machines have it?


nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Sat Feb 07, 2015 8:55 am

@Ginkgo

"How about saying that in order to by-pass the algorithmic loop one needs a non-algorithmic computation to evolve? This is where some of the thinking is at the moment."

1. What is non-algorithmic? Do you mean that the algorithmic loop doesn't exist in the first place? If so, what do you propose as the rationale for the emergence of "self"? Mine was to by-pass the loop.
2. I strongly believe that the process must be logical (logic as we know it), with some premises. It is crucial to me, because I think that when we want to evaluate whether a machine has a "self" or not, we must follow what I mentioned:

"So I guess if a network is able to categorize, is able to work in P=>Q logic, reports that it has a self, and can report how its self has emerged using some premises, P=>Q logic, and categorization, then we need no further items to conclude that it is really conscious. Because I guess that is all we can do too."
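For what it's worth, the criterion quoted above is purely behavioural, so it can be met mechanically by a trivial program. The sketch below is invented for illustration (all names, rules, and categories are assumptions; nothing here is claimed to be conscious): it categorizes an input, applies P=>Q inference over premises, and "reports" a self.

```python
# Purely illustrative toy: an agent that mechanically meets the
# behavioural criterion above - it categorizes, applies P=>Q
# (modus ponens) over premises, and "reports" a self.

class ToyAgent:
    def __init__(self):
        # Rules of the form P => Q, stored as premise -> conclusion.
        self.rules = {"sees_boy": "reports_boy"}
        self.facts = set()

    def categorize(self, stimulus):
        # Trivial categorization: map a raw input to a known predicate.
        category = "sees_boy" if stimulus == "boy" else "unknown"
        self.facts.add(category)
        return category

    def infer(self):
        # Modus ponens: from P and P => Q, add Q to the known facts.
        derived = {self.rules[f] for f in self.facts if f in self.rules}
        self.facts |= derived
        return derived

    def report_self(self):
        # A canned report - which is exactly why the criterion is
        # contested: behaviour alone is all the observer ever sees.
        return "I have a self; it emerged from my premises via P=>Q."

agent = ToyAgent()
agent.categorize("boy")
print(agent.infer())        # derived conclusions
print(agent.report_self())
```

Whether such a report counts as evidence of a self is precisely what the rest of the thread disputes.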

Wyman
Posts: 968
Joined: Sat Jan 04, 2014 2:21 pm

Re: Emergence of self

Post by Wyman » Sat Feb 07, 2015 2:33 pm

Ginkgo wrote:
Wyman wrote:
Possibly an internal representation of an image. We know it cannot actually be that image per se. The idea that all of this sensory information coming into the retina is transformed into electrochemical spike trains that eventually end up in a place (a neural core of consciousness) where it can be observed by the self is incorrect.
My description was backwards - the brain 'takes' an image from the retina and works on it. Is this model correct (for discussion purposes, not technically correct science, but consistent with science): The visual field - what we see here and now - is a picture created/manufactured or taken and transformed by the brain/eye and then, if we choose, observed by a different part of the brain.

In other words, can perception be divided into two parts (and be consistent with current science):

1) subconsciously processed picture - i.e. what we see in the present as the retinal image is worked on by various parts of the brain
2) analysis of that picture by the 'conscious' part of the brain

"If we choose, observed by a different part of the brain" It is the idea of "observed" that is the problem. There is nothing that we can determine that does the observing. The science tells us that consciousness is disunified.

https://en.wikipedia.org/wiki/Cartesian_theater
I'm familiar with the Meditations. But there lies the problem (not the Meditations, but how the higher functions relate to the lower).

I read the first 120 pages of Prinz's book (terrible cover, by the way) and I'm looking forward to the second half where it looks like he gets into the philosophical implications, if any, of his theory.

Here's my problem. I agree that the stimuli go from the retina to a 'lower' level part of the brain for processing. This is what I have always thought of as the 'subconscious' or 'unconscious' part of perception. At some point, 'consciousness happens.' I also call this 'experience' or 'perception.' Prinz is very interested in finding the physical 'place' in the brain where consciousness occurs. For theory of knowledge, I think philosophers are interested not so much in discovering the location of consciousness in the brain, but the metaphorical 'location' of conscious perception in the hierarchy (if there is one) of thought. However, since Prinz places different 'levels' of thought in different places in the brain, both inquiries amount to the same thing. (Aside: what if all the areas of the brain with differing functions were just jumbled up in a mishmash? Then there would still be different levels of processing, just not different physical 'places' - one could easily imagine such a scenario, couldn't one?)

Anyway, Prinz places 'conscious awareness' - perception - at the 'intermediate' level of processing. That is, after pixelated images become 2½-dimensional images. I think this is about right. But I was looking for him to relate the higher-level processing to that intermediate level and haven't found it yet. That is the difficult part, it seems to me, and that is what relates to the above problem of 'observing' the perception.

The higher functions - object-oriented, 3D, abstraction, etc. - must interact with perception in some way. Descartes' model of an inner eye was a description of that interaction. Now, Prinz says that 'attention' is also a necessary condition for consciousness, along with the intermediately processed picture. I have no reason to disagree, especially as he characterizes 'attention' as the opening up of the perception to 'working memory.' To me, however it is characterized (no one wants to be caught dead advocating for the Cartesian model), placing the percept in an area of the brain to be 'worked on' by the higher-level functions while held there by something called 'attention' is basically vindicating the Cartesian model. The fact that the 'mind's eye' is just a physical part of the brain doesn't really change the model - it's just not the pineal gland.

But I read the first part quickly. Let me know your thoughts on what I have right and what I have wrong.

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Sat Feb 07, 2015 10:34 pm

nonenone wrote:@Ginkgo

"How about saying that in order to by-pass the algorithmic loop one needs a non-algorithmic computation to evolve? This is where some of the thinking is at the moment."

1. What is non-algorithmic? Do you mean that the algorithmic loop doesn't exist in the first place? If so, what do you propose as the rationale for the emergence of "self"? Mine was to by-pass the loop.
2. I strongly believe that the process must be logical (logic as we know it), with some premises. It is crucial to me, because I think that when we want to evaluate whether a machine has a "self" or not, we must follow what I mentioned:

"So I guess if a network is able to categorize, is able to work in P=>Q logic, reports that it has a self, and can report how its self has emerged using some premises, P=>Q logic, and categorization, then we need no further items to conclude that it is really conscious. Because I guess that is all we can do too."
Algorithms do exist, and a computer can make use of them. What I am suggesting is that some aspects of human thinking can be analogous to computer problem solving, but not all human thinking can be accommodated by the algorithmic model. It is true that computers and humans can follow step-by-step rules in order to arrive at a solution, but only humans can ever understand those rules. It is the understanding of the rules that is non-computational.

I just wish I could think of a good example. The only thing I can think of at this stage is Gödel's incompleteness theorems.
Last edited by Ginkgo on Sun Feb 08, 2015 1:34 am, edited 3 times in total.

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Sat Feb 07, 2015 10:44 pm

Wyman wrote:
I read the first 120 pages of Prinz's book (terrible cover, by the way) and I'm looking forward to the second half where it looks like he gets into the philosophical implications, if any, of his theory.

Here's my problem. I agree that the stimuli go from the retina to a 'lower' level part of the brain for processing. This is what I have always thought of as the 'subconscious' or 'unconscious' part of perception. At some point, 'consciousness happens.' I also call this 'experience' or 'perception.' Prinz is very interested in finding the physical 'place' in the brain where consciousness occurs. For theory of knowledge, I think philosophers are interested not so much in discovering the location of consciousness in the brain, but the metaphorical 'location' of conscious perception in the hierarchy (if there is one) of thought. However, since Prinz places different 'levels' of thought in different places in the brain, both inquiries amount to the same thing. (Aside: what if all the areas of the brain with differing functions were just jumbled up in a mishmash? Then there would still be different levels of processing, just not different physical 'places' - one could easily imagine such a scenario, couldn't one?)

Anyway, Prinz places 'conscious awareness' - perception - at the 'intermediate' level of processing. That is, after pixelated images become 2½-dimensional images. I think this is about right. But I was looking for him to relate the higher-level processing to that intermediate level and haven't found it yet. That is the difficult part, it seems to me, and that is what relates to the above problem of 'observing' the perception.

The higher functions - object-oriented, 3D, abstraction, etc. - must interact with perception in some way. Descartes' model of an inner eye was a description of that interaction. Now, Prinz says that 'attention' is also a necessary condition for consciousness, along with the intermediately processed picture. I have no reason to disagree, especially as he characterizes 'attention' as the opening up of the perception to 'working memory.' To me, however it is characterized (no one wants to be caught dead advocating for the Cartesian model), placing the percept in an area of the brain to be 'worked on' by the higher-level functions while held there by something called 'attention' is basically vindicating the Cartesian model. The fact that the 'mind's eye' is just a physical part of the brain doesn't really change the model - it's just not the pineal gland.

But I read the first part quickly. Let me know your thoughts on what I have right and what I have wrong.
I think you have a good summary. When I get some time I will reply.

nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Sun Feb 08, 2015 10:18 am

@Ginkgo

"but only humans can ever understand those rules. It is the understanding of the rules that is non-computational."

I guess it is not true. What is understanding? As I told you before, the difference between "understanding" and "mechanistically processing" data is that when you "think" something "understands", you assume that it does have a "self"; but when you "think" something "mechanically processes" data, you assume that it doesn't have a "self".

So the problem is whether you "assume" that a thing has a "self" or not. If you can assume a machine with a "self", you will see that you automatically conclude that it can understand.
So the argument shouldn't be "machines cannot understand"; it should be "machines cannot have a self". And I can't see why the second argument should be true.

To put it more straightforwardly: if a machine can "make you think that it has a self", then you will "think" it can understand. (That is, I guess, the Turing test?)

So please note that the criterion depends not only on the machine, but also on the "assumption of the observer". The tricky part is here. By contemplating what a "self" is and how it can emerge, we not only try to build a machine with a self, but we also try to justify to ourselves what a self could be.
When you have the premise that "machines cannot have a self", the Turing test will fail, because of the premise of the observer.

What I say is that this "self" can emerge within an algorithmic machine.
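The observer-dependence point can be made concrete with a trivial, entirely hypothetical sketch: if the verdict of a Turing-style test is a function of both the machine's behaviour and the observer's premise, then an observer whose premise is "machines cannot have a self" fixes the verdict before seeing any behaviour at all.

```python
# Hypothetical sketch: a Turing-test-style verdict depends on the
# observer's premise as well as on the machine's behaviour.

def observer_verdict(behaviour_suggests_self, observer_allows_machine_self):
    # A dogmatic observer's premise fixes the verdict in advance.
    if not observer_allows_machine_self:
        return False
    # Otherwise the verdict tracks the observed behaviour.
    return behaviour_suggests_self

# Identical behaviour, two observers, two verdicts:
print(observer_verdict(True, True))   # True: a self is attributed
print(observer_verdict(True, False))  # False: the test fails by premise alone
```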

Wyman
Posts: 968
Joined: Sat Jan 04, 2014 2:21 pm

Re: Emergence of self

Post by Wyman » Sun Feb 08, 2015 2:05 pm

nonenone wrote:@Ginkgo

"but only humans can ever understand those rules. It is the understanding of the rules that is non-computational."

I guess it is not true. What is understanding? As I told you before, the difference between "understanding" and "mechanistically processing" data is that when you "think" something "understands", you assume that it does have a "self"; but when you "think" something "mechanically processes" data, you assume that it doesn't have a "self".

So the problem is whether you "assume" that a thing has a "self" or not. If you can assume a machine with a "self", you will see that you automatically conclude that it can understand.
So the argument shouldn't be "machines cannot understand"; it should be "machines cannot have a self". And I can't see why the second argument should be true.

To put it more straightforwardly: if a machine can "make you think that it has a self", then you will "think" it can understand. (That is, I guess, the Turing test?)

So please note that the criterion depends not only on the machine, but also on the "assumption of the observer". The tricky part is here. By contemplating what a "self" is and how it can emerge, we not only try to build a machine with a self, but we also try to justify to ourselves what a self could be.
When you have the premise that "machines cannot have a self", the Turing test will fail, because of the premise of the observer.

What I say is that this "self" can emerge within an algorithmic machine.
There's a 20th century philosopher (I forget his name) who said that believing is rule-following - a belief is a rule of action. I think that rule following is learning an activity, which ties in to my much maligned thesis (on another thread) that knowledge should be viewed in the context of knowing how, rather than knowing that.

I think what you are looking for is the criterion for 'understanding.' A great many philosophers have tried to capture that - Wittgenstein, Quine, Rorty, Davidson all explore what it would mean for us to know (what criteria would satisfy) whether another 'being' understands something (usually language) the same way as we do. So they posit aliens or robots, or recently discovered human tribes and ask whether we could ever know what 'they' are thinking.

They all conclude that we cannot know for sure. Quine was famous for his thesis of the 'indeterminacy of translation' whereby it is logically impossible to translate two languages with complete accuracy. This destroyed the concept of synonymy and therefore the distinction between analytic and synthetic truths (but that's a long story).

Wittgenstein had a famous example where he imagined everyone to possess a 'beetle in a box.' Even if everyone used the word 'beetle' in the same manner, it would be impossible to conclude with certainty whether anyone had the same thing in their box by merely observing their behavior. This ties in with his idea that language is like a 'game' consisting of following rules - he spends a great part of the 'Philosophical Investigations' exploring what 'rule-following' behavior consists of.

Rorty used the example of aliens, called 'Antipodeans,' whose behavior is indistinguishable from humans' and asked whether we could ever know if they had minds and mind-associated concepts such as 'understanding' - i.e., whether looking for such criteria even makes sense.

In sum, this all simply means that you are on the right track, as you are focusing on the right issues. Although Wittgenstein and Rorty would say that you are on the wrong track and you should just forget about it before you spend your life in worthless pursuits. If you are interested in pursuing the subject, Wittgenstein is the most accessible and brilliant in his writings.

nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Sun Feb 08, 2015 3:51 pm

@Wyman

Thank you for your help. But let me ask a question: are you sure that it is just about "understanding"?
For example, take these sentences:

1. I saw the boy.
2. The robot saw the boy.
3. The mechanical device saw the boy.

From my experience, I would say that in all three cases you will unintentionally assign some kind of "self" to the I, the robot, and the mechanical device. That is because "seeing" needs to be performed by something which has a "self".

So I am not particularly seeking the meaning of "understanding". I assume that when we use certain verbs (understand, see, feel, ...) we presume that they are performed by an object that has a "self".

Based on that, I assumed that, due to survival pressures, a concept called "self" will emerge, and when we (as machines) detect that concept in an object, we say that the object "thinks", "sees", "feels" - not that it "mechanically processes". So understanding, in my view, is a secondary issue; "self" is the prior item.

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Sun Feb 08, 2015 9:24 pm

nonenone wrote:@Ginkgo

"but only humans can ever understand those rules. It is the understanding of the rules that is non-computational."

I guess it is not true. What is understanding? As I told you before, the difference between "understanding" and "mechanistically processing" data is that when you "think" something "understands", you assume that it does have a "self"; but when you "think" something "mechanically processes" data, you assume that it doesn't have a "self".

So the problem is whether you "assume" that a thing has a "self" or not. If you can assume a machine with a "self", you will see that you automatically conclude that it can understand.
So the argument shouldn't be "machines cannot understand"; it should be "machines cannot have a self". And I can't see why the second argument should be true.

To put it more straightforwardly: if a machine can "make you think that it has a self", then you will "think" it can understand. (That is, I guess, the Turing test?)

So please note that the criterion depends not only on the machine, but also on the "assumption of the observer". The tricky part is here. By contemplating what a "self" is and how it can emerge, we not only try to build a machine with a self, but we also try to justify to ourselves what a self could be.
When you have the premise that "machines cannot have a self", the Turing test will fail, because of the premise of the observer.

What I say is that this "self" can emerge within an algorithmic machine.

You seem to be saying that if I think or believe that my computer has a sense of self then it does.

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Sun Feb 08, 2015 9:43 pm

nonenone wrote:@Wyman

Thank you for your help. But let me ask a question: are you sure that it is just about "understanding"?
For example, take these sentences:

1. I saw the boy.
2. The robot saw the boy.
3. The mechanical device saw the boy.

From my experience, I would say that in all three cases you will unintentionally assign some kind of "self" to the I, the robot, and the mechanical device. That is because "seeing" needs to be performed by something which has a "self".

So I am not particularly seeking the meaning of "understanding". I assume that when we use certain verbs (understand, see, feel, ...) we presume that they are performed by an object that has a "self".
Perhaps it might work better if you replaced "seeing" with the word "experiencing". You could then say a robot can be programmed to respond to seeing a boy. The immediate objection would be that the robot cannot "experience" the boy. Experiencing and seeing are not the same. You could counter by saying that if the robot can "act" as though it is experiencing the boy, then there is no difference between the correctly programmed response (behaviour) and actual experience.

nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Mon Feb 09, 2015 7:10 am

@Ginkgo
"You seem to be saying that if I think or believe that my computer has a sense of self then it does."

If you have a computer that makes you believe that it has a self, and if you are considered a typical human being, then we can say that your computer has something that makes many people think that it has a self.
If a device has something that makes a higher-level species believe that it has a self, then yes, it may well have a self.

"Perhaps it might work better if you replaced "seeing" with the word "experiencing". You could then say a robot can be programmed to respond to seeing a boy. The immediate objection would be that the robot cannot "experience" the boy. Experiencing and seeing are not the same. You could counter by saying that if the robot can "act" as though it is experiencing the boy, then there is no difference between the correctly programmed response (behaviour) and actual experience."

My point was not to say that a robot can actually be programmed to see or experience. My point was that we as humans seem to be programmed such that when we say something "sees" things, we automatically attach some kind of self to it. It is an experimental claim.

Wyman
Posts: 968
Joined: Sat Jan 04, 2014 2:21 pm

Re: Emergence of self

Post by Wyman » Mon Feb 09, 2015 3:14 pm

nonenone wrote:@Ginkgo
"You seem to be saying that if I think or believe that my computer has a sense of self then it does."

If you have a computer that makes you believe that it has a self, and if you are considered a typical human being, then we can say that your computer has something that makes many people think that it has a self.
If a device has something that makes a higher-level species believe that it has a self, then yes, it may well have a self.

"Perhaps it might work better if you replaced "seeing" with the word "experiencing". You could then say a robot can be programmed to respond to seeing a boy. The immediate objection would be that the robot cannot "experience" the boy. Experiencing and seeing are not the same. You could counter by saying that if the robot can "act" as though it is experiencing the boy, then there is no difference between the correctly programmed response (behaviour) and actual experience."

My point was not to say that a robot can actually be programmed to see or experience. My point was that we as humans seem to be programmed such that when we say something "sees" things, we automatically attach some kind of self to it. It is an experimental claim.
I think that's the logical conclusion of materialism. There is nothing that separates experiencing from seeing, understanding from calculating, or the mental from the physical - except our concepts of these things.

The argument that consciousness is a 'thing' sounds like this to a materialist:

When I am sad, I have a particular 'feeling' or range of feelings in my body. Nothing else feels quite like 'sadness' when I have it. Therefore, there must be something called 'sadness' that emerges from the physical parts of my body.

Consciousness is the same if you substitute it for 'sadness' or any other 'mental' state. The materialist does not buy this argument and accuses you of being a Platonist.

Rather, since the self (consciousness) is but a concept we have about ourselves, goes the materialist's counter-argument, it does not 'exist' separately from our physical body any more than any other type of concept. And of course concepts like these can be applied to others (even machines) as well as ourselves.

On this reading, nonenone, I don't see any mystery in how the concept of self came about. It is just a concept. It is perhaps a mystery how conceptualization came about in the evolutionary process.

nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Tue Feb 10, 2015 11:06 am

@Wyman

"On this reading, nonenone, I don't see any mystery in how the concept of self came about. It is just a concept. It is perhaps a mystery how conceptualization came about in the evolutionary process."

What is your theory of how it came about?

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Tue Feb 10, 2015 11:19 am

nonenone wrote:
"On this reading, nonenone, I don't see any mystery in how the concept of self came about. It is just a concept.

nonenone, you no doubt realize that even proponents of strong AI are not saying that a computer has a concept, or any other such idea that explains consciousness. This of course does not exclude the possibility that consciousness will emerge from machines in the future - that is to say, given the fact that they will become more complex.
noneone wrote:
It is perhaps a mystery how conceptualization came about in the evolutionary process.

If you want to know how the mind creates categories, then this is not a complete mystery.

nonenone
Posts: 19
Joined: Sun Jan 25, 2015 1:42 pm

Re: Emergence of self

Post by nonenone » Tue Feb 10, 2015 12:49 pm

@Ginkgo

Hmmm... actually I didn't say those quotes, Wyman did.

But as for my position, let me make it clear: of course computers do not have a self now. But what I am theorizing is that they can have one, if they are subjected to the evolutionary process that I described in the first post. I simply mean that they hypothetically can say "I have a self", can explain "logically" how it emerged, and that explanation can be justified to us. And furthermore, we may realize that we are just doing the same thing when we say we have "selves". So the gap between us and computers may disappear in this regard.
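As a loose illustration of this evolutionary proposal (everything below is invented: the fitness values, mutation rate, and "self-model" flag are assumptions, not anything specified in the thread), a toy selection loop in which agents that maintain a crude self-model survive better will, after enough generations, be dominated by agents that would "report" having one:

```python
# Hypothetical toy: evolution selecting for a crude "self-model" trait.
import random

random.seed(0)

def fitness(agent):
    # In this toy world, agents that track their own state (a crude
    # "self-model") survive hazards more often.
    return 1.0 if agent["has_self_model"] else 0.2

def evolve(population, generations=50, mutation_rate=0.05):
    for _ in range(generations):
        weights = [fitness(a) for a in population]
        # Fitness-proportional selection of parents.
        parents = random.choices(population, weights=weights, k=len(population))
        # Offspring copy the parent's trait, with occasional mutation.
        population = [
            {"has_self_model": (not p["has_self_model"])
             if random.random() < mutation_rate else p["has_self_model"]}
            for p in parents
        ]
    return population

pop = evolve([{"has_self_model": False} for _ in range(100)])
share = sum(a["has_self_model"] for a in pop) / len(pop)
print(share)  # the trait comes to dominate the population
```

Nothing in the sketch explains what a self is; it only shows the shape of the argument that a self-reporting trait could be evolutionarily favoured.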

Ginkgo
Posts: 2484
Joined: Mon Apr 30, 2012 2:47 pm

Re: Emergence of self

Post by Ginkgo » Tue Feb 10, 2015 9:12 pm

nonenone wrote:@Ginkgo

Hmmm... actually I didn't say those quotes, Wyman did.

Not again. I can't believe that after misquoting Trixie, I've done exactly the same thing again. Might be time for me to retire.
Sorry about that.
nonenone wrote: But as for my position, let me make it clear: of course computers do not have a self now. But what I am theorizing is that they can have one, if they are subjected to the evolutionary process that I described in the first post. I simply mean that they hypothetically can say "I have a self", can explain "logically" how it emerged, and that explanation can be justified to us. And furthermore, we may realize that we are just doing the same thing when we say we have "selves". So the gap between us and computers may disappear in this regard.
So are you saying this could happen given an alternative approach to evolution?
