Mind and Artificial Intelligence: A Dialogue

Discussion of articles that appear in the magazine.

Moderators: AMod, iMod

Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Sun May 14, 2023 12:38 am
Impenitent wrote: Sat May 13, 2023 9:36 pm
attofishpi wrote: Fri May 12, 2023 12:25 am

Easy: AI running on computer hardware - basically a bunch of switches that are either on or off, 1 or 0 - has NO sentience. No more sentience than a rock; OK, since it is a machine, I'll go as far as stating it has as much sentience as a tractor.

brain synapses are merely a bunch of switches that are either on or off...

-Imp
I was wondering whether someone was going to point that out. However, we know the brain provides sentience, and I'm sure you would bet your house that rigging up a bunch of light switches in any arrangement is not going to give the circuit sentience.
we believe the brain may provide sentience... it may be in the pineal gland...

sentience (derived from a soul) may be totally separate from the physical body, although lobotomies do appear to work...

rigging up enough mechanical switches may allow the collective series to achieve consciousness as the biological switches do...

-Imp
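
(For illustration only: a minimal Python sketch of the on/off "switch" abstraction both posters are trading on - the McCulloch-Pitts idealisation under which a transistor gate and a synapse alike reduce to binary inputs feeding a threshold. The function name and the numbers are made up for the example, and nothing in it bears on the sentience question either way.)

def switch_neuron(inputs, threshold):
    # Fire (output 1) if enough of the input switches are on, otherwise stay off (0).
    # The same idealisation covers a logic gate built from transistors and a
    # McCulloch-Pitts model of a neuron driven by on/off synapses.
    return 1 if sum(inputs) >= threshold else 0

# Two familiar circuits from the same switch:
print(switch_neuron([1, 1], threshold=2))  # 1 -> behaves like AND
print(switch_neuron([1, 0], threshold=1))  # 1 -> behaves like OR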
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

omg :roll:

You could rig up a trillion trillion trillion electronic switches and you will NOT have consciousness.
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Sun May 14, 2023 3:59 am omg :roll:

You could rig up a trillion trillion trillion electronic switches and you will NOT have consciousness.
agreed- not biological consciousness...

let me know when you've counted half a trillion...

-Imp
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Impenitent wrote: Sun May 14, 2023 4:53 am
attofishpi wrote: Sun May 14, 2023 3:59 am omg :roll:

You could rig up a trillion trillion trillion electronic switches and you will NOT have consciousness.
agreed- not biological consciousness...

let me know when you've counted half a trillion...

-Imp
But you think the circuit would have consciousness because of the sheer number of the switches?
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Assumption 2: We will embrace the idea of the thinking machine
And then there are those who embrace the idea that human beings are themselves just thinking machines, entirely programmed by nature and its immutable laws.

But: in a No God world, how does one wrap their head around that? It is completely mind-boggling. Why? Because it encompasses grasping how on Earth matter, given the Big Bang as the starting point, was able to evolve over billions of years into the biological self-conscious matter that we are.

The teleological part. Why would nature do this?

With God the answer is Divine. But matter like us in a No God universe? "How" is one thing, "why" another thing altogether.
In 1950, in the article ‘Computing Machinery and Intelligence’, Alan Turing described a computer-imitates-human game which became known as the Turing Test. The test was intended to provide a way of settling the question whether a machine could think. In this game, a human interrogator plays an unrestricted question-and-answer game with two participants, A and B. One of these two participants is a computer. Roughly speaking, the computer is considered intelligent if the interrogator judges the computer to be human at a certain probability.
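
(For illustration only: a minimal Python sketch of the protocol just described. All names here - imitation_game, human_reply, machine_reply, interrogate - are placeholders invented for the example, not anything taken from Turing's paper.)

import random

def imitation_game(interrogate, human_reply, machine_reply, questions):
    # One round: the interrogator exchanges typed questions with two hidden
    # participants, one human and one machine, and must say which channel
    # ("A" or "B") is the human.
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # hide which channel carries the machine
        channels = {"A": machine_reply, "B": human_reply}
    transcript = {label: [(q, reply(q)) for q in questions]
                  for label, reply in channels.items()}
    verdict = interrogate(transcript)           # interrogator names the human
    return channels[verdict] is machine_reply   # True if the machine fooled them

# Repeated over many rounds, the machine counts as passing (on Turing's
# criterion) if it is misidentified as the human at a high enough rate.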
Just out of curiosity, has a chatbot ever been tested to see if it could differentiate another chatbot from a human being? Or two or more chatbots in a discussion together? How would that be different from discussions with us?

Or how about chatbots programmed to "think" of themselves as either male or female, black or white, gay or straight? Or one programmed to be a Marxist or a fascist or an anarchist... confronting another chatbot with a completely different ideological bent.
Turing was aware that many of his contemporaries would hesitate to attribute intelligence to a machine, some because of beliefs in a soul that could only reside in a human, others due to the prejudice that a machine could never have the capabilities that make intelligence possible. Therefore, the conversations in the Imitation Game should be conducted via a teleprinter, i.e., a linguistic interface. People would type their responses in, not knowing what was on the other end of the line. Thereby, an environment is created in which only the ‘intellectual’ capabilities of the respondent are put to the test.
As though intellect itself was the crucial factor in differentiating us from them. That's why it seems only when chatbots become the equivalent of humanoids or androids or cyborgs or replicants will things start to get particularly interesting.
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Sun May 14, 2023 5:06 am
Impenitent wrote: Sun May 14, 2023 4:53 am
attofishpi wrote: Sun May 14, 2023 3:59 am omg :roll:

You could rig up a trillion trillion trillion electronic switches and you will NOT have consciousness.
agreed- not biological consciousness...

let me know when you've counted half a trillion...

-Imp
But you think the circuit would have consciousness because of the sheer number of the switches?
it could be that the ghost is only in biological machines...

ghosts may inhabit other machines...

-Imp
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Impenitent wrote: Mon May 15, 2023 1:29 am
attofishpi wrote: Sun May 14, 2023 5:06 am
Impenitent wrote: Sun May 14, 2023 4:53 am

agreed- not biological consciousness...

let me know when you've counted half a trillion...

-Imp
But you think the circuit would have consciousness because of the sheer number of the switches?
it could be that the ghost is only in biological machines...

ghosts may inhabit other machines...

-Imp

There is no more sentience in Artificial General Intelligence than in a washing machine, a tractor or a rock. Simply having more switches, programmable to the point where the machine can interact in many ways that perfectly mimic humans, does not make it sentient in any way, and unfortunately it's going to be a common mistake, now and into the future, for a lot of people to think that it does.
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

perfectly mimicking...

how much sentience is required for intelligence?

when the program teaches itself is it demonstrating intelligence?

feedback loops abound...

-Imp
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Impenitent wrote: Tue May 16, 2023 12:52 am perfectly mimicking...

how much sentience is required for intelligence?

when the program teaches itself is it demonstrating intelligence?

feedback loops abound...

-Imp
Sentience is obviously required for our intelligence as biological life requires qualia input to develop intelligence.

Artificial Intelligence requires NO sentience as its inputs are digital representations (via those electronic switches) of its environment.
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Tue May 16, 2023 3:12 am
Impenitent wrote: Tue May 16, 2023 12:52 am perfectly mimicking...

how much sentience is required for intelligence?

when the program teaches itself is it demonstrating intelligence?

feedback loops abound...

-Imp
Sentience is obviously required for our intelligence as biological life requires qualia input to develop intelligence.

Artificial Intelligence requires NO sentience as its inputs are digital representations (via those electronic switches) of its environment.
digital representations of qualia input...

how hot is it? look at a thermometer...

is it on fire? can you hear the smoke alarm?

-Imp
User avatar
iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Assumption 3: The experts are the hardest to fool

Turing had a rather unusual understanding of the concept of ‘intelligence’. Not only did he believe that one does not need a biological brain to be intelligent – a view shared by many today – he also believed that whether or not something is intelligent is to some extent in the eye of the beholder. This is still a rather unusual position.
Okay, definitively, beyond any possible shadow of a doubt, let's at least attempt to pin this down philosophically in a world of words here. Your definitions and deductions against everyone else's. Then we take the consensus to the hard guys and gals and see if they can connect the dots that we use to the dots that they use.

Also, I'm back to imagining Turing himself exploring the "concept of intelligence" with an AI entity in regard to the actual flesh and blood parameters of homosexuality. Is it objectively rational or irrational? Is it objectively moral or immoral?
In ‘Intelligent Machinery’ (1948), he expresses the idea that whether a machine is viewed as being intelligent depends on the person who judges it. We see intelligence, he argues, in cases where we are unable to predict or explain behavior. Thus, the same machine may appear intelligent to one person, but not to someone else who understands how it works. For this reason, Turing believed that the interrogator in The Imitation Game should be an average human, and not a machine expert.
Does that compute? I may be misunderstanding his point but it sounds a lot like suggesting that in regard to the determinism/free will/compatibilism discussion and debate, we ought to leave it up to the average human instead of the scientists. Or even the philosophers.

Though in regard to intelligence we still need a context: intelligent in regard to predicting or explaining what? Computers can calculate far faster and with greater sophistication than we can in any number of mathematical and scientific contexts. And if the programming is sophisticated enough it can "know" more facts about any number of subjects than we flesh-and-blood human beings do. Making its predictions and explanations preferable to most of us. Does that then make them more intelligent than we are? In what sense?
There is a bit of astonishment in the online community that a Google employee with a computer science degree – of all people! – would fall for the illusion of consciousness created by one of his company’s products. Why does he believe in LaMDA’s consciousness if he knows the technology behind it? Some have pointed to his spiritual orientation as an explanation: Lemoine is a mystic Christian. However, an important point is that the functioning of artificial neural networks is not easy to understand even for experts. Due to their complex architecture and non-symbolic mode of operation, they are difficult for humans to interpret in a definitive way.
And now this:
https://www.nytimes.com/2023/05/16/tech ... oning.html

Microsoft Says New A.I. Shows Signs of Human Reasoning
A provocative paper from researchers at Microsoft claims A.I. technology shows the ability to understand the way people do. Critics say those scientists are kidding themselves.


In any event, I'm the first to admit that I don't have either the education or the background to contribute to this discussion with any real degree of sophistication.

But for those who do, please get around to the part where AI chatbots, and whatever comes beyond them, explore this:

"How ought we to behave morally/rationally in a world bursting at the seams with both conflicting goods and contingency, chance and change?"

The parts I root in dasein in my signature threads and in the Benjamin Button Syndrome pertaining to identity in the is/ought world.
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Impenitent wrote: Tue May 16, 2023 4:28 pm
attofishpi wrote: Tue May 16, 2023 3:12 am
Impenitent wrote: Tue May 16, 2023 12:52 am perfectly mimicking...

how much sentience is required for intelligence?

when the program teaches itself is it demonstrating intelligence?

feedback loops abound...

-Imp
Sentience is obviously required for our intelligence as biological life requires qualia input to develop intelligence.

Artificial Intelligence requires NO sentience as its inputs are digital representations (via those electronic switches) of its environment.
digital representations of qualia input...
They are not digital representations of QUALIA in any way. They are digital representations of the AI's environment; nothing here proves your android has more sentience than a tractor!!

Impenitent wrote: Tue May 16, 2023 4:28 pmhow hot is it?
00110011 00110010 01000011 (It's 32C)

Impenitent wrote: Tue May 16, 2023 4:28 pmlook at a thermometer...
It wouldn't need to.

Impenitent wrote:is it on fire?
00110001 (True) (**The AI android has infrared camera also, it DETECTED a lot of heat)

Impenitent wrote:can you hear the smoke alarm?
00110000 (False) (** The AI android is too far away to DETECT any smoke alarm)
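
(For what it's worth, those 8-bit groups are just the ASCII codes for the characters of each answer. A minimal Python sketch - decode is a made-up helper - of how a string of switch states stands in for a sensor reading:)

def decode(bits):
    # Turn space-separated 8-bit groups back into the ASCII text they encode.
    return "".join(chr(int(group, 2)) for group in bits.split())

print(decode("00110011 00110010 01000011"))  # "32C" - the temperature reading
print(decode("00110001"))                    # "1"   - True, heat detected
print(decode("00110000"))                    # "0"   - False, no smoke alarm detected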
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Thu May 18, 2023 1:58 am
Impenitent wrote: Tue May 16, 2023 4:28 pm
attofishpi wrote: Tue May 16, 2023 3:12 am

Sentience is obviously required for our intelligence as biological life requires qualia input to develop intelligence.

Artificial Intelligence requires NO sentience as its inputs are digital representations (via those electronic switches) of its environment.
digital representations of qualia input...
They are not digital representations of QUALIA in any way. They are digital representations of the AI's environment; nothing here proves your android has more sentience than a tractor!!

Impenitent wrote: Tue May 16, 2023 4:28 pmhow hot is it?
00110011 00110010 01000011 (It's 32C)

you expressed it as a digital representation of qualia... just as AI would do...

Impenitent wrote: Tue May 16, 2023 4:28 pmlook at a thermometer...
It wouldn't need to.

it wouldn't need to? but you did?
Impenitent wrote:is it on fire?
00110001 (True) (**The AI android has infrared camera also, it DETECTED a lot of heat)

again, you expressed it as a digital representation of qualia... just as AI would do...
Impenitent wrote:can you hear the smoke alarm?
00110000 (False) (** The AI android is too far away to DETECT any smoke alarm)

and again, you expressed it as a digital representation of qualia... just as AI would do...
your output has provided several digital representations of qualia... just as AI would produce...

-Imp
User avatar
attofishpi
Posts: 9939
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

Impenitent wrote: Thu May 18, 2023 3:04 am
attofishpi wrote: Thu May 18, 2023 1:58 am
Impenitent wrote: Tue May 16, 2023 4:28 pm

digital representations of qualia input...
They are not digital representations of QUALIA in any way. They are digital representations of the AI's environment; nothing here proves your android has more sentience than a tractor!!

Impenitent wrote: Tue May 16, 2023 4:28 pmhow hot is it?
00110011 00110010 01000011 (It's 32C)

you expressed it as a digital representation of qualia... just as AI would do...

Impenitent wrote: Tue May 16, 2023 4:28 pmlook at a thermometer...
It wouldn't need to.

it wouldn't need to? but you did?
Impenitent wrote:is it on fire?
00110001 (True) (**The AI android has infrared camera also, it DETECTED a lot of heat)

again, you expressed it as a digital representation of qualia... just as AI would do...
Impenitent wrote:can you hear the smoke alarm?
00110000 (False) (** The AI android is too far away to DETECT any smoke alarm)

and again, you expressed it as a digital representation of qualia... just as AI would do...
your output has provided several digital representations of qualia... just as AI would produce...

-Imp
They are not even close to representing qualia.

If I take a human hand and a robot hand and hit them both hard with a hammer, it is the human that experiences QUALIA. The measurement of pressure upon the robot's hand is not a representation of that QUALIA!! There is a huge difference between representing the environment with switches and qualia itself.
Impenitent
Posts: 4305
Joined: Wed Feb 10, 2010 2:04 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Impenitent »

attofishpi wrote: Thu May 18, 2023 3:39 am
Impenitent wrote: Thu May 18, 2023 3:04 am
attofishpi wrote: Thu May 18, 2023 1:58 am

They are not digital representations of QUALIA in any way. They are digital representations of the AI's environment; nothing here proves your android has more sentience than a tractor!!




00110011 00110010 01000011 (It's 32C)

you expressed it as a digital representation of qualia... just as AI would do...




It wouldn't need to.

it wouldn't need to? but you did?



00110001 (True) (**The AI android has infrared camera also, it DETECTED a lot of heat)

again, you expressed it as a digital representation of qualia... just as AI would do...



00110000 (False) (** The AI android is too far away to DETECT any smoke alarm)

and again, you expressed it as a digital representation of qualia... just as AI would do...
your output has provided several digital representations of qualia... just as AI would produce...

-Imp
They are not even close to representing qualia.

If I take a human hand and a robot hand and hit them both hard with a hammer, it is the human that experiences QUALIA. The measurement of pressure upon the robot's hand is not a representation of that QUALIA!! There is a huge difference between representing the environment with switches and qualia itself.
the human may feel pain, but both targets will be measurably affected...

-Imp