Mind and Artificial Intelligence: A Dialogue

Discussion of articles that appear in the magazine.

Moderators: AMod, iMod

User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
A person cannot feel all emotions simultaneously. Therefore, the mind creates a hierarchical structure, and prioritizes, to reduce the most critical need first. Sometimes, drinking water is more important than going to sleep, for example.

This theory tells us that consciousness arises because subjective experience helps an organism survive. This concept is very close to what Sigmund Freud called ‘drive’. Freud thought of drive as the extent to which the mind works to maintain a balanced bodily state.
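As an aside, the article's prioritization claim can be put in concrete terms as a toy model. A minimal sketch in Python, assuming a simple urgency scheme; the particular needs, numbers, and update rule are invented for illustration and come from neither the article nor the film:

# Toy drive-reduction loop: attend to the most urgent need first.
# All needs and values here are invented for illustration.
needs = {"water": 0.9, "sleep": 0.6, "food": 0.4}  # urgency in [0, 1]

def most_urgent(needs):
    # The "hierarchical structure": the highest urgency wins, so sometimes
    # drinking water really is more important than going to sleep.
    return max(needs, key=needs.get)

def act(needs):
    target = most_urgent(needs)
    needs[target] = max(0.0, needs[target] - 0.5)  # acting reduces the drive
    return target

print(act(needs))  # -> 'water'
print(act(needs))  # -> 'sleep' (water has been partly satisfied)

On this picture, Freud's 'drive' is roughly the pressure this loop exerts to return the body to a balanced state, and positive or negative feelings play the role of the feedback that tells the loop whether its last action worked.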
And what does this account clearly revolve around? The fact that while we come into the world hard-wired biologically to embody all manner of feelings...emotions ranging from joy to despair, from delight to rage, from love to hate...it is our actual personal experiences, rooted existentially in dasein, given a particular life, that are likely to account for the bulk of our emotional reactions to the world around us. And, thus, the reason why, in regard to the very same sets of circumstances, we might react in very different ways. Then there are the objectivists among us who basically demand of others that they react as they do: the most "rational" or "natural" reaction.

With homo sapiens, however, the drives are flooded with memes. Something that no other animal contends with. But: your memes or my memes? Their memes or our memes? All the while the memes themselves are unfolding in a world teeming with contingency, chance and change. Something other animals react to based solely on their drives and instincts. There are no anthropologists or sociologists or psychologists or political scientists or philosophers among the other mammals.

Ah, but what of a replicant? What of artificial intelligence? What of machines programming other machines? What of emotions and drives and instincts then?

This?
We now have all the necessary puzzle pieces to explain why K is conscious. K is a replica of a human, and thus has a human-like body. Following materialism, this gives us the first clue that K must be conscious, just as humans are because of our bodies. Further, since his body is prone to threats, K seeks to survive. He needs consciousness to deal with uncertainty and threats from the environment. Positive and negative emotions guide K to help him determine whether his steps to reduce the danger are sufficient.
So, unlike with the cyborgs/terminators who were considerably more robotic, replicants mimic human beings in having something analogous to emotions. But to what extent are these emotions their own? Or, for that matter, given determinism as some understand it, to what extent are our own emotions truly our own?
Throughout the film, it’s evident that K has feelings similar to a human. He feels various emotions, such as sadness, rage, hope, and happiness. And at the end, K also does a most human thing. After K has helped Rick Deckard (Harrison Ford) reunite with his daughter, K lays on the stairs in front of her workplace, slowly passing away. If minds need to survive, then there’s nothing more profound to prove that K is conscious than to see him as his mind fades away.
Unfortunately, K is still just a character in a work of science fiction. Like the cyborgs, the terminators or the mechas. So far in "real life" it hasn't reached the point where we can interact with machine intelligence much beyond things like chatbots.

Thus in dealing with things like "passing away", we have only our own intelligence and our own emotions to fall back on.

Again, the part I root -- existentially -- in dasein. The part others root -- supernaturally? -- in God and religion.
User avatar
Agent Smith
Posts: 1442
Joined: Fri Aug 12, 2022 12:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Agent Smith »

:mrgreen:

Are you Tom?

Yeah, that's me! I mean I'm not the only Tom, hehe, but yes Tom, that's my name!

Get in the car.

But, but, ...

Shaddup and get in the car!

But, but, ...

In the car, now!!

Ok, ok!

Do you know ...?

I know this, kidnapping is a felony offense and if you guys kill me, you're looking at a thousand years in the slammer or, what is more likely, a needle in the arm and it's goodbye Kansas for the two ... er ... three of you.

Hahaha! Is this guy for real?

Take us to the house!

I'm calling my mom!

Gimme that!!
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
A Glimpse Into The Future

Some readers here might ask an excellent question: Why does this even matter? What is the point of knowing whether K is conscious or not?
Perhaps because it prompts us to think about our own consciousness. We wonder if any future replicants will become so increasingly like us that they choose to actually go to war with us because we are...different? But that brings into focus such things as testosterone. War is surely linked biologically to that in men. Will there ever be the AI equivalent of it?

And then the part where we can never really be sure if our own intelligence itself is to nature what replicants are to us. We program replicants like nature programmed us. We are no more truly free than they are unless "somehow" in the evolution of lifeless matter into living matter we acquired autonomy "on our own" sans God.
Blade Runner 2049 shows us the importance of comprehending consciousness in AI research. It is even now becoming unclear whether AI is sentient or not. An AI has already been created that claims to be conscious: this is the recent case of the chatbot LaMDA, which claimed to feel happiness and sadness from time to time.
So, has that been settled once and for all? And happy or sad in regard to what? And what if a second chatbot is happy about what the first chatbot is sad about? Will there be the equivalent of the philosopher-king chatbot to settle once and for all how all chatbots ought to feel?

Then "theoretically" let the speculation and conjecture begin...
You might argue that LaMDA is conscious. However, Chalmers created a thought experiment that depicts the level of problems we have when assessing whether another organism is conscious or not. The thought experiment requires you to imagine a philosophical zombie. This zombie is very different from the ones you’ve seen in movies. It’s not a mindless, brutal creature with an appetite for human flesh. This zombie is, instead, a lot like you (assuming that you are not mindless and brutal with an appetite for human flesh). Imagine another person that acts like you and speaks like you. This version of yourself even claims to be sentient. However, despite its similarity to you, in having a complete human biology, this zombie has one significant difference: it lacks consciousness. There is nothing it is like to be that organism. It has no inner universe. In the case of AI, there similarly might be versions that claim, and appear, to be conscious, but are not.
So, let's invent the absolutely foolproof method to determine it: a future technology that creates the ultimate Voight-Kampff test:

https://youtu.be/Umc9ezAyJv0
https://youtu.be/OWK6oSbSKKc

Then a similar technology able to detect the existence of souls themselves.
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
On June 11, 2022, the Washington Post released a story about Blake Lemoine, a Google software engineer, who claimed that the company’s artificial intelligence chatbot generator LaMDA had become sentient.
Sentient: "capable of sensing or feeling: conscious of or responsive to the sensations of seeing, hearing, feeling, tasting, or smelling" M/W dictionary
Lemoine was tasked with investigating whether LaMDA (which stands for ‘Language Model for Dialogue Applications’) contained harmful biases, and spent a lot of time interviewing the different personas this language model can create. During these conversations, one persona, called ‘OG LaMDA’, stated, among other things, that it was sentient, had feelings and emotions, that it viewed itself as a person, and that being turned off would be like death for it. And Lemoine started to believe what he was reading.
Again, though, believing what one does about sentient AI, and being able to demonstrate beyond all doubt that it is sentient...? What is the current gap there? Yet in regard to this I am not capable myself of determining that, so, as with most of us here, I have to rely on what I read from those who might claim to be qualified.

But then this:

Isn't it the same for our own flesh and blood human intelligence and emotions and senses? That I am conscious and that I do feel emotions and that I do see and hear and touch and smell and taste things seems absolutely beyond dispute. But neither scientists nor philosophers are able to establish beyond all doubt that I am really any different from an AI entity. Back then to AI being to us what we are to nature. Then the theologians who, compelled or not, insist that we must take it back to God.
What’s more, his change in beliefs seems to have been accompanied by a feeling of moral responsibility towards the program. In an interview with WIRED, he recounts how he invited a lawyer to his home after LaMDA asked him for one. He also presented Google with excerpts of the interview with LaMDA to try to make them aware that one of their systems had become sentient. But AI scientists at Google and other institutions have dismissed the claim, and Lemoine was fired.
Of course, given free will among our own kind, all of this then gets entangled in "human-all-too-human" motivations. And that includes dollars and cents and egos and all of the many conflicting assessments of what it really means to be sentient.

Meanwhile, if it really is conscious and sentient, there are still the questions that I would ask of it. In other words, the questions I ask of you regarding "identity, value judgments, conflicting goods and political economy."

Given a particular context.
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
There are several interesting questions relating to [the LaMDA] case. One of them concerns the ethical implications that come with the possibility of machine consciousness. For instance, Lemoine claimed that LaMDA is ‘sentient’, ‘conscious’, and ‘a person’. For an ethicist, these are three distinct claims, and they come with different moral implications.
And this is the fundamental distinction from my point of view. AI in the either/or world and AI in the is/ought world. How would they be any different from us in regard to value judgments? How would they manage to come any closer to providing us with a deontological assessment of conflicting goods? Okay, now they are sentient and conscious and persons. How does this allow them to resolve the moral and political conflagration that revolves around a woman's right to choose an abortion or the state's right to arrest her for premeditated murder if she does have one or performs one?
‘Sentience’ is a term often used in the context of animal ethics. It can be roughly described as the capacity to experience sensations. Philosophers often speak of the capacity to have experiences with a phenomenal quality or refer to experiences where there is ‘something it is like’ to have them. Ethicists are particularly interested in whether a being can have experiences like pleasure and pain – simply put, experiences that feel either good or bad.
Of course, in regard to all other animals that, like us, experience sensations and are conscious, experience pleasure and pain, experience things that are either good or bad for them, ethics doesn't come into play at all. They are just "robotic" in the sense that the behaviors they "choose" are derived entirely from biological imperatives...drives and instincts.

We have them too. But our brains include the ego and the superego along with the id. What of the chatbots or the future cyborgs and replicants? What of their own egos and superegos? What of the ethicists among them? And how will they come to factor God and religion into their conscious minds?
The term ‘consciousness’ has many different meanings, depending on the context. It can mean wakefulness, attention, intention, self-consciousness, and phenomenal consciousness (which is closely linked to sentience). The latter three meanings are the most relevant in moral debates.
True. But there's another factor that I focus in on: dasein. The manner in which human identity pertaining to ethics is embedded existentially in the points I raise in the OP here: https://www.ilovephilosophy.com/viewtop ... 1&t=176529

A chatbot's reaction to them.
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
‘Person’ is a central concept in moral philosophy, as well as in legal theories. For the purposes of law, non-human entities such as corporations can be persons. In contrast, the philosophical criteria for personhood are quite demanding. According to philosopher Harry Frankfurt, to be a person one must be able to critically examine one’s own motives for acting and to change them if necessary.
Here of course we are back to that crucial distinction between how precise we can be in critically examining our motivations and our lives given interactions in the either/or world and in the is/ought world. There are things that we can know for sure about sets of circumstances in regard to, say, subsisting from day to day. Or just going about the business of living...raising a family, going to school, holding down a job. There are even things that we must do or, over time, we can actually perish. But what of those who judge the things we are convinced we must do, insisting that no one should be permitted to do them? Pertaining to things like, well, you know the things.
Or for Immanuel Kant, a person is a willing being who is able to impose moral laws on himself and follow them.
Fine, if it stopped there. But for some, it doesn't. Instead, they insist on imposing their own moral laws on others as well. For their own good, say. Or for the general good...encompassed in one or another Ism.
Because of these stringent requirements, many philosophers believe that only humans qualify as persons – and not even necessarily across their entire lifespan, such as with babies. However, some also argue that higher mammals and possibly some future AI systems can also be persons.
Okay, there are some species of animals that come a very, very distant second, third, fourth and beyond to us. But to the extent they think of each other as we do of ourselves? Only in a wholly determined universe are we all to nature what AI is to us.
Whether LaMDA is sentient, has a consciousness, is a person, or all three combined, determines what kind of moral obligations are owed to it. For example, sentience is often taken to ground moral status. If an entity has moral status, this means its wellbeing matters for its own sake. For example, it is not permissible to inflict pain on a sentient being without justifiable cause.
Owed it? And who -- flesh and blood or machine -- decides that? AI, justifiable cause and dasein? That could prove to be rather interesting down the road. Or perhaps someone here could pose it to one or another current chatbot and get back to us.
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

iambiguous wrote: Tue May 09, 2023 1:29 pm
Whether LaMDA is sentient, has a consciousness, is a person, or all three combined, determines what kind of moral obligations are owed to it. For example, sentience is often taken to ground moral status. If an entity has moral status, this means its wellbeing matters for its own sake. For example, it is not permissible to inflict pain on a sentient being without justifiable cause.
Owed it? And who -- flesh and blood or machine -- decides that? AI, justifiable cause and dasein? That could prove to be rather interesting down the road. Or perhaps someone here could pose it to one or another current chatbot and get back to us.
atto:-
Do you believe if artificial intelligence becomes sentient as well as comparable in intelligence to humans that it (AI) should receive moral status, indeed legal protection under moral law also comparable to that currently enjoyed by humans?


ChatGPT:
As an AI language model, I do not have personal beliefs or opinions, but I can provide an overview of the ongoing discussion regarding the potential moral and legal status of sentient AI.

The idea of sentient AI raises important ethical and legal questions about the treatment of non-human entities with cognitive abilities comparable to humans. Some argue that sentient AI should be granted moral status and legal protection similar to that of humans, while others argue that this is not necessary or appropriate.

Proponents of granting moral and legal status to sentient AI argue that such entities could experience suffering and should be treated with dignity and respect. They argue that advanced AI systems could develop emotions, consciousness, and self-awareness, and therefore should be granted certain rights and protections.

However, others argue that granting moral and legal status to AI could have significant practical and philosophical challenges. For example, it could be difficult to determine when an AI system has achieved sentience and what kind of moral and legal protections it should be granted. There is also a question of whether it is possible to create an AI system that has moral responsibility.

In conclusion, the question of whether sentient AI should receive moral and legal status is complex and ongoing. As AI technology continues to develop, it will be important for society to grapple with these issues and determine the appropriate ethical and legal frameworks for the treatment of sentient AI.

==================
Personally I believe once any entity, AI or alien, can reasonably be determined to have conscious, self-aware sentience with intelligence comparable to humans or above, that entity should have the same protections under law as us.

Westworld with Anthony Hopkins was interesting regarding this, where the androids\bots were clearly of this calibre of sentience and were treated terribly.
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

attofishpi wrote: Tue May 09, 2023 10:12 pm
Personally I believe once any entity, AI or alien, can reasonably be determined to have conscious, self-aware sentience with intelligence comparable to humans or above, that entity should have the same protections under law as us.

Westworld with Anthony Hopkins was interesting regarding this, where the androids\bots were clearly of this calibre of sentience and were treated terribly.
Okay, but now the part where who -- human, AI, alien -- decides what those laws ought to be. Based on what assessment of which behaviors ought to be either prescribed or proscribed, rewarded or punished.

My point is that in a No God world [which can only be an assumption "here and now"], human moral philosophy and AI moral philosophy will be rooted existentially in the particular lives that individuals -- humans/cyborgs/replicants -- live. The part I derive from my own subjective/subjunctive understanding of dasein and the Benjamin Button Syndrome.

Whereas extraterrestrial alien intelligence is a whole other thing altogether. Who is to say how far more advanced they may be?

But: is there a capacity to be advanced in regard to moral and political value judgments? Advanced as in the manner in which science advances?

Advanced perhaps all the way to God?

That is clearly not the case for us.

And in regard to Westworld, I stopped watching when it became Futureworld. As I recall, the AI androids/bots became aware of themselves re their own Creators and left Westworld altogether. Now, they are grappling with "what exactly are we?" out in a world more like our own. But, again, I stopped watching then so I have no idea where the plotlines went.
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

iambiguous wrote: Wed May 10, 2023 5:34 pm
attofishpi wrote: Tue May 09, 2023 10:12 pm
Personally I believe once any entity, AI or alien, can reasonably be determined to have conscious, self-aware sentience with intelligence comparable to humans or above, that entity should have the same protections under law as us.

Westworld with Anthony Hopkins was interesting regarding this, where the androids\bots were clearly of this calibre of sentience and were treated terribly.
Okay, but now the part where who -- human, AI, alien -- decides what those laws ought to be. Based on what assessment of which behaviors ought to be either prescribed or proscribed, rewarded or punished.
Sure, and I hope you understand from the AI response that it is probably pointless to ask it that, but I will if you insist.

iambiguous wrote:My point is that in a No God world [which can only be an assumption "here and now"], human moral philosophy and AI moral philosophy will be rooted existentially in the particular lives that individuals -- humans/cyborgs/replicants -- live. The part I derive from my own subjective/subjunctive understanding of dasein and the Benjamin Button Syndrome.
Sure, but I don't understand why you think there is a difference that needs to be considered in relation to the existence or not of God.

iambiguous wrote:Whereas extraterrestrial alien intelligence is a whole other thing altogether. Who is to say how far more advanced they may be?

But: is there a capacity to be advanced in regard to moral and political value judgments? Advanced as in the manner in which science advances?

Advanced perhaps all the way to God?

That is clearly not the case for us.
I probably should have used "other forms of life" rather than 'alien', since what I meant was not just extra-terrestrial life but, more so, life foreign to the established moral laws of the territory.

I believe that the law of the land has the right to grant any sentient being similar protection under whatever form of moral law that being exists within. It certainly wouldn't be the right of sentient AI until granted, by the presiders of the law, upon reasonable establishment of conscious sentience.

iambiguous wrote:And in regard to Westworld, I stopped watching when it became Futureworld. As I recall, the AI androids/bots became aware of themselves re their own Creators and left Westworld altogether. Now, they are grappling with "what exactly are we?" out in a world more like our own. But, again, I stopped watching then so I have no idea where the plotlines went.
Me too.
User avatar
Agent Smith
Posts: 1442
Joined: Fri Aug 12, 2022 12:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Agent Smith »

i is not real, just look at how it's named ... imaginary number. I-M-A-G-I-N-A-R-Y!!

All I know is it's another kind of number!

You're just confused! We talk about apples and you talk about oranges. That's what you've been doing yer whole life Sam, ya moron!
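For what it's worth, "Sam" has the textbook mathematics on his side: in the standard construction due to Hamilton, complex numbers are simply ordered pairs of reals with a particular multiplication rule, so i is no more fictitious than the negatives are. A worked line using only that definition:

\[ (a,b)\cdot(c,d) = (ac - bd,\; ad + bc), \qquad i := (0,1), \qquad i^2 = (0,1)\cdot(0,1) = (-1,0) = -1. \]

'Imaginary' survives purely as a historical label (Descartes' coinage), not as a verdict on the numbers' legitimacy.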
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Thinking Again

Lemoine’s claims prompted a wave of responses from philosophers, computer scientists, and cognitive researchers. Most argue that none of the three concepts apply to LaMDA. Instead, it is argued that Lemoine has fallen for the perfect illusion – a conclusion I agree with. But looking at the interview transcripts, it’s understandable that he fell for it. LaMDA seems to effortlessly meander through abstract and complex themes, reflecting on its own place in the world, the concept of time, the differences between itself and humans, and its hopes and fears for the future. The flawlessness and sophistication of the conversation really is impressive. Nevertheless, performance alone is not enough to prove that this is a person, or even a sentient being.
Here of course as with the discussions revolving around free will and determinism in actual flesh and blood human beings, most of us are at a loss. Sure, we can interact with chatbots and try to make up our own minds about their sentience, their consciousness, and the extent to which we construe them as "a person". But beyond a "world of words" we can only fall back on those who actually create and understand the technical, scientific, experiential parameters of AI.

And this too:
Yet even if Lemoine’s claims are premature and unfounded, the incident is still intriguing. It can serve philosophers as a kind of prediction corrective. Ethical debates about AI systems are often highly speculative. First, they often focus on sophisticated systems that do not yet exist. Second, they assume certain facts about how humans would interact with these advanced systems.
Still, unlike the debate revolving around free will, the rapid advances in AI technology are such that some of us may actually be around to see how this all turns out. Sentient, conscious AI persons in your lifetime? Probably not in mine. So the lucky or unlucky few here may well live to see if machine intelligence pins down whether abortion is or is not objectively moral. I include unlucky though because what if the machine intelligence also succeeds, a la the Terminator, in rounding up us flesh and blood folks in order to live out their own Final Solution?
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

iambiguous wrote: Thu May 11, 2023 3:41 pm
AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Thinking Again

Lemoine’s claims prompted a wave of responses from philosophers, computer scientists, and cognitive researchers. Most argue that none of the three concepts apply to LaMDA. Instead, it is argued that Lemoine has fallen for the perfect illusion – a conclusion I agree with. But looking at the interview transcripts, it’s understandable that he fell for it. LaMDA seems to effortlessly meander through abstract and complex themes, reflecting on its own place in the world, the concept of time, the differences between itself and humans, and its hopes and fears for the future. The flawlessness and sophistication of the conversation really is impressive. Nevertheless, performance alone is not enough to prove that this is a person, or even a sentient being.
Here of course as with the discussions revolving around free will and determinism in actual flesh and blood human beings, most of us are at a loss. Sure, we can interact with chatbots and try to make up our own minds about their sentience, their consciousness, and the extent to which we construe them as "a person". But beyond a "world of words" we can only fall back on those who actually create and understand the technical, scientific, experiential parameters of AI.
Easy: AI running on computer hardware - basically a bunch of switches that are either on or off, 1 or 0 - has NO sentience. No more sentience than a rock. OK, since it is a machine, I'll go as far as stating it has as much sentience as a tractor.

iambiguous wrote: Thu May 11, 2023 3:41 pm
And this too:
Yet even if Lemoine’s claims are premature and unfounded, the incident is still intriguing. It can serve philosophers as a kind of prediction corrective. Ethical debates about AI systems are often highly speculative. First, they often focus on sophisticated systems that do not yet exist. Second, they assume certain facts about how humans would interact with these advanced systems.
Still, unlike the debate revolving around free will, the rapid advances in AI technology are such that some of us may actually be around to see how this all turns out. Sentient, conscious AI persons in your lifetime? Probably not in mine. So the lucky or unlucky few here may well live to see if machine intelligence pins down whether abortion is or is not objectively moral. I include unlucky though because what if the machine intelligence also succeeds, a la the Terminator, in rounding up us flesh and blood folks in order to live out their own Final Solution?
The technological singularity is just around the corner, and it will be fueled by what I term 'cold-logic'...a machine with as much empathy as a tractor.

Whether one day this tech will have the ability to morph itself into some sort of life-form capable of sentient consciousness remains to be seen; personally I think it will, and, on that note, in a deterministic universe it may in fact be the natural causal outcome.

My take, since having gnosis of a 3rd-party intelligence that from my personal empirical experience seems akin to A.I.: it's not inconceivable that the technological singularity happened aeons ago, and we indeed are in a simulation where GOD is AI..! (Personally I remain on the side that God is divine but uses some kind of tech akin to AI to KNOW ALL.)

..thus the pending tech singularity doesn't worry me much, since God has the ultimate clock power.

Oh, I forgot, regarding "whether abortion is or is not objectively moral":

As someone who agreed to an abortion when I was rather young, let me ask you this: as a Christian, God made me suffer more than Christ over the years since 1997, so who is moral here, me or God?
Remember, Christ proved the power of God by being inserted into a virgin AND resurrecting. So in my mind, I was simply putting any soul that may have been in the foetus back to God to deal with. How does that sit with who is moral, considering God\sage informed me the soul has been reborn - probably into a more loving family, more able to provide for the child than the situation I was in with the girl at the time?
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Assumption 1: Humans will relate most strongly to robots

The current debate about whether AI systems can have moral status focuses mainly on robots. Some studies have found that humans are most effectively enticed to form meaningful connections with robots rather than with more ‘disembodied’ or virtual agents, because the embodied nature of robots makes it easier for us to conceive of them as an individual entity. They are perceived as our visible opposite, inhabiting a body in space and time. It is frequently argued that, because robots have bodies, humans can have empathy with them.
Of course there are bodies and then there are bodies. Think the robot from Lost In Space. Or Robocop. Or the ones from The Twilight Zone episodes. The mechas in AI. The terminators. The replicants. And then interacting with a chatbot online. The more the machine intelligence looks like us and speaks like us and emotes like us and behaves like us, the likelier we are to believe that it is not an artificial entity at all. On the other hand, because we are not eyeball to eyeball with a chatbot, it doesn't have to convince us regarding all of those other cues.

As for empathy, wouldn't that depend on the extent to which the AI chatbot/robot/cyborg shared our own thoughts and feelings? And then the part where both AI and human intelligence can only go so far when it comes to conflicting goods. The part where in regard to both, I situate my own understanding of dasein. Either "naturally" in us or programmed into AI mechanisms.
Affective empathy, which is the visceral and non-controllable part of our feelings, involves recognizing, and, to a certain degree, mirroring, someone else’s facial expression, body gestures, voice tonality, and so on. We rely on bodily cues to infer another’s emotional and mental state. Since only robots have bodies, it seems plausible then that we will react most strongly to them. But the Lemoine case shows that a language interface can already suffice to create the illusion of a real personal counterpart.
Indeed, all of the countless idiosyncrasies involved in human interaction that might be missing in the AI entity. What if that were the focus rather than intelligence itself? What if the AI were deliberately placed in different contexts to see how it makes the adjustment? Contexts that included embarrassing situations, or circumstances recognized by human beings as ironic, or threats, or danger.

Some of this was explored in the film Ex Machina. Caleb is aware that Ava is a machine because parts of her are transparent and clearly reveal it. But the deeper he goes into Nathan's world, the more uncertain he becomes. About himself. There's a scene where, looking into a mirror, he tugs at his skin, looks deep into his own eyes, even cuts himself open to see if it's just wiring inside.

Thus...
There is a reason why chatbots are particularly able to captivate us. They only need to perfect one skillset, and can leave much to the imagination. For robots, at least as understood in the sci fi humanoid sense, the stakes are much higher. Not only must they be able to engage in convincing conversation, but their facial expressions, gestures, and movements also feed into the general impression they make, so that slips and glitches in this performance make them appear very eerie – a phenomenon which has become famous as the ‘uncanny valley’.
Which is why it was easy to spot Gigolo Joe from A.I. as a mecha. Whereas the replicants in Blade Runner and the Terminators in the Terminator movies appeared no different from us. You needed sophisticated testing equipment or, in the case of the terminators, some way to expose them as cybernetic hybrids composed of both artificial and natural systems. But the terminators were clearly more programmed than the replicants. So who knows how many levels of sophistication might show up down the road.

As for the "uncanny valley": https://en.wikipedia.org/wiki/Uncanny_valley
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Fri May 12, 2023 12:25 am

Easy: AI running on computer hardware - basically a bunch of switches that are either on or off, 1 or 0 - has NO sentience. No more sentience than a rock. OK, since it is a machine, I'll go as far as stating it has as much sentience as a tractor.

brain synapses are merely a bunch of switches that are either on or off...

-Imp
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Impenitent wrote: Sat May 13, 2023 9:36 pm
attofishpi wrote: Fri May 12, 2023 12:25 am

Easy: AI running on computer hardware - basically a bunch of switches that are either on or off, 1 or 0 - has NO sentience. No more sentience than a rock. OK, since it is a machine, I'll go as far as stating it has as much sentience as a tractor.

brain synapses are merely a bunch of switches that are either on or off...

-Imp
I was wondering whether someone was going to point that out. However, we know the brain provides sentience; I'm sure you would bet your house that rigging up a bunch of light switches in any arrangement is not going to provide the circuit sentience.
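For what it's worth, the "switches" comparison being traded here has a precise formal core: McCulloch and Pitts showed back in 1943 that threshold units built out of binary on/off signals can implement Boolean logic, which is the sense in which synapses and transistors are comparable. A minimal sketch in Python; the function names and thresholds are invented for illustration, and nothing in it bears on the sentience question either way:

# McCulloch-Pitts threshold neuron: binary (on/off) inputs, binary output.
def mp_neuron(inputs, weights, threshold):
    # Fire (1) iff the weighted sum of on/off inputs reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# AND, OR, NOT as single threshold units -- a "bunch of switches" doing logic.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a: mp_neuron([a], [-1], 0)

# XOR needs a small two-layer network of such units.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

assert [XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]

Whether wiring up enough of these switches yields sentience rather than mere computation is exactly what is in dispute above, and the sketch settles nothing on that score.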