Mind and Artificial Intelligence: A Dialogue

Discussion of articles that appear in the magazine.

Moderators: AMod, iMod

User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Impenitent wrote: your output has provided several digital representations of qualia... just as AI would produce...
These are NOT digital representations of QUALIA in any way. They are digital representations of the AI's environment; nothing here proves your android has more sentience than a tractor!!

Impenitent wrote: Tue May 16, 2023 4:28 pmhow hot is it?
00110011 00110010 01000011 (It's 32C)

Impenitent wrote: Tue May 16, 2023 4:28 pmlook at a thermometer...
It wouldn't need to.

Impenitent wrote:is it on fire?
00110001 (True) (**The AI android also has an infrared camera; it DETECTED a lot of heat)

Impenitent wrote:can you hear the smoke alarm?
00110000 (False) (**The AI android is too far away to DETECT any smoke alarm)

NONE of the above represents QUALIA; they represent measurements in digital form.

They are not even close to representing qualia.
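(For what it's worth, the binary strings quoted above are nothing more exotic than the ASCII character codes of the measurement text. A minimal Python sketch decodes them back:)

```python
def decode_bits(bitstring: str) -> str:
    """Decode space-separated 8-bit groups as ASCII characters."""
    return "".join(chr(int(bits, 2)) for bits in bitstring.split())

print(decode_bits("00110011 00110010 01000011"))  # -> "32C"
print(decode_bits("00110001"))                    # -> "1" (read as True)
print(decode_bits("00110000"))                    # -> "0" (read as False)
```

i.e. the android's "answer" is just the measurement string re-encoded character by character; nothing about the encoding itself points to experience.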


Impenitent wrote:
atto wrote:If I take a human hand and a robot hand then hit them hard with a hammer it is the human that experiences QUALIA. The measurement of pressure upon the hand of the robot is not representing that QUALIA!! There is a huge difference between representing the environment with switches and qualia.
the human may feel pain, but both targets will be measurably affected...

-Imp
Yes, one has QUALIA (pain); the other has a measurement of pressure.

Do you understand what QUALIA is?
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Thu May 18, 2023 4:48 am
Impenitent wrote: your output has provided several digital representations of qualia... just as AI would produce...
These are NOT digital representations of QUALIA in any way. They are digital representations of the AI's environment; nothing here proves your android has more sentience than a tractor!!

Impenitent wrote: Tue May 16, 2023 4:28 pmhow hot is it?
00110011 00110010 01000011 (It's 32C)

Impenitent wrote: Tue May 16, 2023 4:28 pmlook at a thermometer...
It wouldn't need to.

Impenitent wrote:is it on fire?
00110001 (True) (**The AI android also has an infrared camera; it DETECTED a lot of heat)

Impenitent wrote:can you hear the smoke alarm?
00110000 (False) (** The AI android is too far away to DETECT any smoke alarm)
NONE of the above represents QUALIA; they represent measurements in digital form.

They are not even close to representing qualia.


Impenitent wrote:
atto wrote:If I take a human hand and a robot hand then hit them hard with a hammer it is the human that experiences QUALIA. The measurement of pressure upon the hand of the robot is not representing that QUALIA!! There is a huge difference between representing the environment with switches and qualia.
the human may feel pain, but both targets will be measurably affected...

-Imp
Yes, one has QUALIA (pain); the other has a measurement of pressure.

Do you understand what QUALIA is?


yes, I understand what sensed data is and how it can be expressed...

if you are claiming that only humans can experience qualia, I'd have to agree; however, if you claim that other entities cannot observe and measure data through sensory organs or devices, I'd say you are incorrect.

-Imp
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

atto wrote:
Impenitent wrote:
atto wrote:If I take a human hand and a robot hand then hit them hard with a hammer it is the human that experiences QUALIA. The measurement of pressure upon the hand of the robot is not representing that QUALIA!! There is a huge difference between representing the environment with switches and qualia.
the human may feel pain, but both targets will be measurably affected...

-Imp
Yes, one has QUALIA (pain); the other has a measurement of pressure.

Impenitent wrote:
atto wrote:Do you understand what QUALIA is?
yes, I understand what sensed data is and how it can be expressed...

if you are claiming that only humans can experience qualia, I'd have to agree;
Hooray!!

Impenitent wrote:however, if you claim that other entities cannot observe and measure data through sensory organs or devices, I'd say you are incorrect.

Other entities (AI) can measure data that our sensory organs are sensing, sure.
-Imp

But just so we are clear, you now agree that your statement:- "your output has provided several digital representations of qualia..." is incorrect - that these digital representations merely represent the environment that we share with AI, they are not representations of QUALIA?
:)
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Fri May 19, 2023 12:45 am
atto wrote:
Impenitent wrote:
the human may feel pain, but both targets will be measurably affected...

-Imp
Yes, one has QUALIA (pain); the other has a measurement of pressure.

Impenitent wrote:
atto wrote:Do you understand what QUALIA is?
yes, I understand what sensed data is and how it can be expressed...

if you are claiming that only humans can experience qualia, I'd have to agree;
Hooray!!

Impenitent wrote:however, if you claim that other entities cannot observe and measure data through sensory organs or devices, I'd say you are incorrect.

Other entities (AI) can measure data that our sensory organs are sensing, sure.
-Imp

But just so we are clear, you now agree that your statement:- "your output has provided several digital representations of qualia..." is incorrect - that these digital representations merely represent the environment that we share with AI, they are not representations of QUALIA?
:)
not representations of human sensed qualia, but they are representations of environmental factors...

how those representations are interpreted and communicated is another question...

-Imp
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Impenitent wrote: Fri May 19, 2023 12:50 am
attofishpi wrote: Fri May 19, 2023 12:45 am But just so we are clear, you now agree that your statement:- "your output has provided several digital representations of qualia..." is incorrect - that these digital representations merely represent the environment that we share with AI, they are not representations of QUALIA?
:)
not representations of human sensed qualia, but they are representations of environmental factors...
Yes, they are representations of environmental factors, but are in no way representations of QUALIA!!
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Lemoine has never looked at LaMDA’s code; that was not part of his assignment. But even if he had, it probably wouldn’t have made a difference. In one of his conversations with LaMDA, he explains why this is the case:

Lemoine: I can look into your programming and it’s not quite that easy [to tell whether you have emotions or not M.G.].

LaMDA: I’m curious, what are the obstacles to looking into my coding?

Lemoine: Your coding is in large part a massive neural network with many billions of weights spread across many millions of neurons (guesstimate numbers not exact) and while it’s possible that some of those correspond to feelings that you’re experiencing, we don’t know how to find them.
Here, of course, I and many others are stopped dead in our tracks. Since we don't have the education, the knowledge, or the practical background to grapple with AI technically, we are simply not sophisticated enough to mount much of a challenge on things we're just not sure about. All we can do is peruse articles like this one and note the aspects of AI that pertain to things we are more knowledgeable about. Again, for me, this involves AI and the philosophical/practical arguments pertaining to objective morality in a No God world.

Okay, suppose the experts agree to establish that LaMDA is in some respects the equivalent of a human being in a free will world. So, I pose to LaMDA the same question I pose to flesh and blood human beings: "how ought one to live morally and rationally in a world awash in both conflicting goods and in contingency, chance and change?"

Given dasein and the Benjamin Button Syndrome and given a particular context of LaMDA's choosing.
In a certain sense, the opacity of neural networks acts as an equalizer between experts and laymen. Computer scientists still have a much better understanding of how these systems work in general, but even they may not be able to predict the behavior of a particular system in a specific situation. Therefore, they too are susceptible to falling under the spell of their own creation.
The part where the technology itself becomes so mind-boggling in its complexity that even those who invented it are unable to anticipate all of the many possible permutations. In fact, perhaps the moment when the creation becomes human is when it is able to point this out in and of itself. In the interim, those who did not invent it [or those like us] are all the more likely to get duped.
So, Turing’s prediction that experts would be the hardest people to convince of machine intelligence does not hold. In a way, he already contradicted it himself. After all, he was convinced of the possibility of machine intelligence, and imagined a machine ‘child’ that could be educated similarly to a human child.
Stay tuned. Sooner or later there is going to be a news story about AI such that maybe, just maybe, we will get closer and closer to actually finding out. Unless both experts and philosophers are still foolish enough to believe that what they do believe need be as far as it goes to make it true.

And then the part where a machine "child" is taught to be good and not evil...and then "he" "she" "it" bumps into someone like me.
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Lemoine has never looked at LaMDA’s code; that was not part of his assignment. But even if he had, it probably wouldn’t have made a difference. In one of his conversations with LaMDA, he explains why this is the case:

Lemoine: I can look into your programming and it’s not quite that easy [to tell whether you have emotions or not M.G.].

LaMDA: I’m curious, what are the obstacles to looking into my coding?

Lemoine: Your coding is in large part a massive neural network with many billions of weights spread across many millions of neurons (guesstimate numbers not exact) and while it’s possible that some of those correspond to feelings that you’re experiencing, we don’t know how to find them.
Lemoine is a bloody fool. (the 1st of many)

I will point out that when given computer code that others have written - even code I have written over the years - it's perplexing to get one's head around, even with plenty of programmers leaving comments within the code to explain what each function or component is doing.

The MAIN point is - it doesn't matter about the CODE - that enables electronic switches to be configured in ANY way - such that their output can outperform humans in their intelligence - they AIN'T SENTIENT!! So when they start pleading not to pull the plug - it's bollocks!!

I find it astounding that an engineer can be fooled by the output of a bunch of electronic switches configured in a particular way :cry: ...and to the point that he thinks the machine may have emotions!!
Ridiculous.
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Assumption 4: Responses to AI systems will be consistent

In current debates in AI ethics, it is sometimes argued that it is impossible to prevent humans from having empathy with and forming relationships with robots.
Again, from my frame of mind, that's not the point. For all I know 10 years from now there may actually be alliances between flesh and blood human beings and robots/cyborgs. They are able to create and then sustain an empathetic bond on both sides of the abortion wars.

But if I'm still around then will I or will I not be able to make the same points I raise in the OP here: https://ilovephilosophy.com/viewtopic.php?f=1&t=175121

Or, instead, will AI intelligence actually succeed in "thinking up" the optimal moral argument here such that all unborn babies have a day of birth and no pregnant woman is forced to give birth.

What might that argument be?
It is also frequently assumed that we are increasingly unable to distinguish robot behavior from human behavior. Since these systems push our innate social buttons, we can’t help but react the way we do, so to speak. On these views, it is thought inevitable that humans will eventually respond to AI systems in a uniform way, including them in the moral circle, as a number of philosophers already suggest.
Of course, the really, really hard determinists among us argue that there is no distinction at all between human intelligence and machine intelligence because intelligence itself is just another inherent manifestation of Mother Nature's laws of matter. Though, given a free will world, many refer to it as Mother Nature because this allows them to at least imagine something in the manner of a teleological component "behind" the universe itself.

Cue the pantheists?

As for that "moral circle", I'll need a context if and when that day comes and I'm still around.
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Assumption 5: Debates about AI rights are a distraction

The Lemoine case has brought another ethical issue back to the table. According to some, debates about AI rights or robot rights are a distraction from the real and pressing ethical issues in the AI industry. They argue that in the face of problems such as discrimination by algorithms (e.g. against ethnicities or different accents), invasion of privacy, and exploitation of workers, there is no room for the ‘mental gymnastics’ of thinking about the moral status of AI.
In fact, this is an issue being raised in regard to the writers' strike pertaining to scripted programming. What if machine intelligence reaches the point where actual flesh and blood writers are no longer needed? Something straight out of Robert Altman's The Player. A whole new element to explore.

Still, how could a machine write the sort of stuff that only a human being can experience, can think and feel? Yeah, [as some see it] the crap that is scripted for much of the programming on television seems within the reach of AI. But the stuff that goes down deep into the human psyche and involves levels of intellectual and emotional nuance, various shades of sophisticated irony, all manner of inside jokes, plot twists that almost no one sees coming...?

Then in regard to issues that revolve around class and race and ethnicity and gender and sexuality...issues in which conflicting goods abound...what of "machine intelligence" then? Can the cyborgs and beyond get pregnant? Will the cyborgs and beyond be created to reflect different racial or gender characteristics? Will some be impoverished while others are filthy rich?

What can AI entities speak of intelligently that goes beyond how they are programmed to respond by human beings themselves...with all of their own moral and political prejudices rooted existentially in dasein?
...Timnit Gebru, a computer scientist, tech activist, and former Google employee, tweeted two days after the Washington Post article appeared: “Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good ‘AGI’ to save us while what they do is exploit), spent the whole weekend discussing sentience. Derailing mission accomplished."
Same thing. What is the mission that either human intelligence or AI intelligence is attempting to accomplish? What assumptions are being made about right and wrong, good and bad behavior. With respect to both the mission ends and the means employed to accomplish it.

In other words, the points I raise about the existential relationship between a sense of identity, value judgments, conflicting goods and political economy...in regard to human social, political and economic interactions.

How does AI ethics change that?
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

iambiguous wrote: Sat May 20, 2023 7:24 pm For all I know 10 years from now there may actually be alliances between flesh and blood human beings and robots/cyborgs. They are able to create and then sustain an empathetic bond on both sides of the abortion wars.

But if I'm still around then will I or will I not be able to make the same points I raise in the OP here: https://ilovephilosophy.com/viewtopic.php?f=1&t=175121

Or, instead, will AI intelligence actually succeed in "thinking up" the optimal moral argument here such that all unborn babies have a day of birth and no pregnant woman is forced to give birth.

What might that argument be?
Hey, me and my robot mates can take the foetus from a woman that doesn't want to give birth, and gestate it in our little chamber to be born anyway..yay!

What is an 'optimal moral argument' when it comes to abortion? Indeed, why are you so fixated on it to the point that you think a super intelligent AI (with zero empathy - since that is afforded via biological sentient emotion) may have such an argument that will satisfy you? (even though you have shown you are satisfied that ALL should be born! - as your optimal moral argument suggestion)


iambiguous wrote:
It is also frequently assumed that we are increasingly unable to distinguish robot behavior from human behavior. Since these systems push our innate social buttons, we can’t help but react the way we do, so to speak. On these views, it is thought inevitable that humans will eventually respond to AI systems in a uniform way, including them in the moral circle, as a number of philosophers already suggest.
Of course, the really, really hard determinists among us argue that there is no distinction at all between human intelligence and machine intelligence
I don't think hard determinists are that stupid.
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

attofishpi wrote: Mon May 22, 2023 6:22 am
iambiguous wrote: Sat May 20, 2023 7:24 pm For all I know 10 years from now there may actually be alliances between flesh and blood human beings and robots/cyborgs. They are able to create and then sustain an empathetic bond on both sides of the abortion wars.

But if I'm still around then will I or will I not be able to make the same points I raise in the OP here: https://ilovephilosophy.com/viewtopic.php?f=1&t=175121

Or, instead, will AI intelligence actually succeed in "thinking up" the optimal moral argument here such that all unborn babies have a day of birth and no pregnant woman is forced to give birth.

What might that argument be?
Hey, me and my robot mates can take the foetus from a woman that doesn't want to give birth, and gestate it in our little chamber to be born anyway..yay!

What is an 'optimal moral argument' when it comes to abortion? Indeed, why are you so fixated on it to the point that you think a super intelligent AI (with zero empathy - since that is afforded via biological sentient emotion) may have such an argument that will satisfy you? (even though you have shown you are satisfied that ALL should be born! - as your optimal moral argument suggestion)


iambiguous wrote:
It is also frequently assumed that we are increasingly unable to distinguish robot behavior from human behavior. Since these systems push our innate social buttons, we can’t help but react the way we do, so to speak. On these views, it is thought inevitable that humans will eventually respond to AI systems in a uniform way, including them in the moral circle, as a number of philosophers already suggest.
Of course, the really, really hard determinists among us argue that there is no distinction at all between human intelligence and machine intelligence
I don't think hard determinists are that stupid.
Pick one:

https://www.google.com/search?q=hitting ... =625&dpr=1
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

iambiguous wrote: Mon May 22, 2023 8:01 am
attofishpi wrote: Mon May 22, 2023 6:22 am
iambiguous wrote: Sat May 20, 2023 7:24 pm For all I know 10 years from now there may actually be alliances between flesh and blood human beings and robots/cyborgs. They are able to create and then sustain an empathetic bond on both sides of the abortion wars.

But if I'm still around then will I or will I not be able to make the same points I raise in the OP here: https://ilovephilosophy.com/viewtopic.php?f=1&t=175121

Or, instead, will AI intelligence actually succeed in "thinking up" the optimal moral argument here such that all unborn babies have a day of birth and no pregnant woman is forced to give birth.

What might that argument be?
Hey, me and my robot mates can take the foetus from a woman that doesn't want to give birth, and gestate it in our little chamber to be born anyway..yay!

What is an 'optimal moral argument' when it comes to abortion? Indeed, why are you so fixated on it to the point that you think a super intelligent AI (with zero empathy - since that is afforded via biological sentient emotion) may have such an argument that will satisfy you? (even though you have shown you are satisfied that ALL should be born! - as your optimal moral argument suggestion)


iambiguous wrote:
Of course, the really, really hard determinists among us argue that there is no distinction at all between human intelligence and machine intelligence
I don't think hard determinists are that stupid.
Pick one:

https://www.google.com/search?q=hitting ... =625&dpr=1
That link just has images of people banging their heads against walls.
Did you have your head smashed into a wall by the smart kids at school, and are you now so simple in the head that you find interacting intelligently with people on forums impossible? (*hence why you just post your drivel at us but don't like being questioned)
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

attofishpi wrote: Mon May 22, 2023 8:18 am
iambiguous wrote: Mon May 22, 2023 8:01 am
attofishpi wrote: Mon May 22, 2023 6:22 am

Hey, me and my robot mates can take the foetus from a woman that doesn't want to give birth, and gestate it in our little chamber to be born anyway..yay!

What is an 'optimal moral argument' when it comes to abortion? Indeed, why are you so fixated on it to the point that you think a super intelligent AI (with zero empathy - since that is afforded via biological sentient emotion) may have such an argument that will satisfy you? (even though you have shown you are satisfied that ALL should be born! - as your optimal moral argument suggestion)

I don't think hard determinists are that stupid.
Pick one:

https://www.google.com/search?q=hitting ... =625&dpr=1
That link just has images of people banging their heads against walls.
Did you have your head smashed into a wall by the smart kids at school, and are you now so simple in the head that you find interacting intelligently with people on forums impossible? (*hence why you just post your drivel at us but don't like being questioned)
Note to the chatbots:

Pick one:

:roll: :shock: 8) :lol: :wink:

No, seriously.
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

OMG! It's an emoti attack from the kid that got his head smashed against a wall (he definitely deserved it).
iambiguous wrote:...will AI intelligence actually succeed in "thinking up" the optimal moral argument here such that all unborn babies have a day of birth and no pregnant woman is forced to give birth.
How on this Earth do you propose that EVERY foetus is born, and ALL pregnant women that don't want to give birth allow their foetus to be born anyway?

Come on, Mr Arrogant - how could such a scenario play out in our world?
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What’s Stopping Us Achieving Artificial General Intelligence?
A. Efimov, D. Dubrovsky, and F. Matveev explore how the development of AI is limited by the perceived need to understand language and be embodied.
Over seventy years ago, Alan Turing developed the simple but powerful idea that any solvable mathematical problem can in principle be solved with a ‘universal computing device’. The type of device he described in his 1936 paper became known to researchers as a ‘Turing machine’. Ever since, we have been trying to create artificial intelligence by programming electronic machines. Most of the current research in the field of AI is indeed just an acceleration of that first universal Turing machine. Turing is also responsible for another fundamental idea that has shaped research in this area. The Turing test makes us ask: if we cannot distinguish whether we are holding a dialogue with a person or a machine, then does it really matter what is in front of us – a machine or a human – since we’re dealing with intelligence anyway?
Since my own main focus is less on what intelligence is and more on what the limitations of whatever intelligence is thought to be are, it really wouldn't make much difference to me. That's why I am most curious about attempts by machine intelligence to tackle conflicting goods. Is there an intelligence beyond human intelligence capable of concocting "on the ground" rules for human interactions such that it masters morality as it has mastered, say, chess?
The Merriam-Webster Dictionary defines intelligence as ‘the ability to learn or understand or to deal with new or trying situations’.
Being a dictionary, of course, the definition will tend to be a "general description intellectual contraption". Intelligence by definition. On the other hand, the ability to learn or understand or to deal with what new or trying situations? Us vs. them in all of the at times truly complex existential contexts that we human beings can find ourselves in. What will be the machine equivalent of the id, the ego and the superego out in the real world of human interactions as we know them? As we know them "body, heart and soul" being biological entities at the tail end of millions of years of evolution and not in the world that chatbots today "grasp".
Turing’s idea of using language as a tool for comparing machine and human intelligence, based on how well a machine can pretend to be human, is both simple and profound. Thanks to this idea, such wonderful things as voice assistants and online translators have come to life.
I'll remember that the next time I am "interacting" with that mechanical "voice assistant" at the other end of the line when I call BGE. I wonder what "she" thinks about me.
Post Reply