Mind and Artificial Intelligence: A Dialogue

Discussion of articles that appear in the magazine.

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

iambiguous wrote: Tue May 23, 2023 1:51 pm Since my own main focus is less on what intelligence is and more on what the limitations of whatever intelligence is thought to be are, it really wouldn't make much difference to me. That's why I am most curious about attempts by machine intelligence to tackle conflicting goods. Is there an intelligence beyond human intelligence capable of concocting "on the ground" rules for human interactions such that it masters morality as it has mastered, say, chess?
Concocting the ground rules for us to interact?

How is a non-sentient entity, incapable of empathy beyond a feigned simulation of it, going to master the deeply subjective area of morality and, more to the point, then provide ground rules that we mere humans will all agree to converse within, in areas dictated by an AI?

All you are providing is another moral conundrum (should we listen to an AI?) sitting atop whatever moral area the AI tells us it has figured out 'ground rules' for. Do you actually think the human populace is going to give a rat's arse what an artificial intelligence thinks should be the areas for discussion on any moral issue?

Such as your suggestion...
iambiguous wrote:...will AI intelligence actually succeed in "thinking up" the optimal moral argument here such that all unborn babies have a day of birth and no pregnant woman is forced to give birth.
How on this Earth do you propose that EVERY foetus is born, and that ALL pregnant women who don't want to give birth allow their foetus to be born anyway?

Feel free to attack me with emoticons again, or provide a link to images reminding you of having your head banged against a wall as a child.
OR, you could provide a reasonable reply to your clearly unreasonable statements! :D

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

attofishpi wrote: Wed May 24, 2023 12:59 am
iambiguous wrote: Tue May 23, 2023 1:51 pm Since my own main focus is less on what intelligence is and more on what the limitations of whatever intelligence is thought to be are, it really wouldn't make much difference to me. That's why I am most curious about attempts by machine intelligence to tackle conflicting goods. Is there an intelligence beyond human intelligence capable of concocting "on the ground" rules for human interactions such that it masters morality as it has mastered, say, chess?
Concocting the ground rules for us to interact?

How is a non-sentient entity, incapable of empathy beyond a feigned simulation of it, going to master the deeply subjective area of morality and, more to the point, then provide ground rules that we mere humans will all agree to converse within, in areas dictated by an AI?

All you are providing is another moral conundrum (should we listen to an AI?) sitting atop whatever moral area the AI tells us it has figured out 'ground rules' for. Do you actually think the human populace is going to give a rat's arse what an artificial intelligence thinks should be the areas for discussion on any moral issue?

Such as your suggestion...
iambiguous wrote:...will AI intelligence actually succeed in "thinking up" the optimal moral argument here such that all unborn babies have a day of birth and no pregnant woman is forced to give birth.
How on this Earth do you propose that EVERY foetus is born, and that ALL pregnant women who don't want to give birth allow their foetus to be born anyway?

Feel free to attack me with emoticons again, or provide a link to images reminding you of having your head banged against a wall as a child.
OR, you could provide a reasonable reply to your clearly unreasonable statements! :D
Note to others:

Again, as with Age, I almost never read anything that he or she posts. Still, if, for some peculiar reason, you do and actually come upon something that is either intelligible or [if possible] interesting, please bring it to my attention.


Now, pick one:
8)
8)
8)

Re: Mind and Artificial Intelligence: A Dialogue

Post by attofishpi »

iambiguous wrote: Wed May 24, 2023 2:40 am Note to others:
Oh, that's right, you think you have a fan club! You're worse than Peter Kropotkin; at least he doesn't take over threads and post the sort of ridiculous crap you have here... what amounts to a continuous drivel of poorly thought-out diarrhea.


Note to iambiguous's fanbase: can anyone point out, from the below, what makes any of the points made by your esteemed iambiguous an intelligent way to use artificial intelligence where issues of morality are concerned?

iambiguous wrote: Tue May 23, 2023 1:51 pm Since my own main focus is less on what intelligence is and more on what the limitations of whatever intelligence is thought to be are, it really wouldn't make much difference to me. That's why I am most curious about attempts by machine intelligence to tackle conflicting goods. Is there an intelligence beyond human intelligence capable of concocting "on the ground" rules for human interactions such that it masters morality as it has mastered, say, chess?
Concocting the ground rules for us to interact?

How is a non-sentient entity, incapable of empathy beyond a feigned simulation of it, going to master the deeply subjective area of morality and, more to the point, then provide ground rules that we mere humans will all agree to converse within, in areas dictated by an AI?

All you are providing is another moral conundrum (should we listen to an AI?) sitting atop whatever moral area the AI tells us it has figured out 'ground rules' for. Do you actually think the human populace is going to give a rat's arse what an artificial intelligence thinks should be the areas for discussion on any moral issue?

Such as your suggestion...
iambiguous wrote:...will AI intelligence actually succeed in "thinking up" the optimal moral argument here such that all unborn babies have a day of birth and no pregnant woman is forced to give birth.
How on this Earth do you propose that EVERY foetus is born, and that ALL pregnant women who don't want to give birth allow their foetus to be born anyway?

(you are a f'ing irrational moron)

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What’s Stopping Us Achieving Artificial General Intelligence?
A. Efimov, D. Dubrovsky, and F. Matveev explore how the development of AI is limited by the perceived need to understand language and be embodied.
Modern developments are now getting close to the point when a single computer can tackle any problem, thus resembling a human being in the broadness of the application of its intelligence.
Bullshit?

Note any computer, chatbot, robot or AI entity able to tackle and then pin to the mat even a single moral or political conflict that besets us flesh and blood folks.

Or, for that matter, note any flesh and blood intelligence that has accomplished it.

How would posing such questions as...

* which tastes better, chocolate or vanilla ice cream?
* is baseball a better sport than football?
* is music the greatest of all the arts?
* is abstract painting really art at all?
* what is the greatest film ever made?

...fare any better with the AI crowd?
This is called artificial general intelligence (AGI), which is also sometimes called ‘strong AI’. The idea is, the better and more accurate the means we employ to improve a program, the better it ‘understands’ our words, and the closer we approach artificial general intelligence.
Tell me this won't be inherently tricky. It's not for nothing that "understands" is used. And it's not all that far removed from scientists grappling with the human brain in order to "understand" whether the grappling itself involves some measure of actual autonomy.

Then back to this: Strong AI at the existential intersection of identity, value judgments, conflicting goods and political economy.

And, sure, I'll let it choose or "choose" the context.
But what if this basic assumption is wrong? What if it is not just language that determines the ‘generality’ or the ‘intelligence’ of an artificial agent? Is there a possibility that the signpost planted by Turing (and not only by him) seventy years ago is pointing in the wrong direction, and we should reconsider our route?
That's my point too. There's human intelligence. But there are also complex human emotions and psychological states. And the part that revolves around libido and instinct and drives. The AI equivalent of the "will to power"? Or what of those like Freud and Jung speculating on the subconscious and the unconscious human mind? The AI equivalent of that?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What’s Stopping Us Achieving Artificial General Intelligence?
A. Efimov, D. Dubrovsky, and F. Matveev explore how the development of AI is limited by the perceived need to understand language and be embodied.
An Old Idea about Language

The old idea is the suggestion from Alan Turing that if a machine imitates intelligence so well that a large percentage of humans conversing with it by text alone can’t tell it is a machine, then it possesses intelligence.
So, is that ridiculous or not? It would seem that it either possesses intelligence as it is understood that we possess intelligence [in a free will world] or it does not. That lots and lots of people think it possesses autonomous intelligence doesn't make it true. After all, lots and lots of people believe in astrology and numerology and tarot cards and New Age nostrums and God too.

Wouldn't there have to be a way to demonstrate that it does in fact have an intellectual and emotional and psychological self as we do?

On the other hand, the same might be true of us. Until it is demonstrated that we do in fact have free will, we may well be to nature what AI is to us.
In fact, in ‘Computing Machinery and Intelligence’ (Mind, 1950), Turing identified several areas as representing the ‘highest manifestations’ of human intelligence. His examples included the study of languages (and translations); games (chess, etc.); and mathematics and cryptography (including solving riddles). If in these areas the output of a computer cannot be distinguished from that of a human then its level of thinking is equivalent to that of a human, and so we can say that we’re dealing with an intelligent machine. According to Turing, the high-level, intellectual functions of the human brain can be reproduced in a computer without the computer precisely imitating the functioning of the brain.
Ever and always the contexts revolve around ways in which intelligence can be measured. A language is either logical or not. A game is either won or lost. A mathematical equation is either understood or not. A riddle is either solved or not. A code is either broken or not.

But what of all the myriad emotional reactions that flesh-and-blood human beings can have when either accomplishing or failing to accomplish these things? Or is it all about intelligence itself: things that can be measured?

Which is why my own main interest in AI still pertains to my own main interest in human beings: "I" at the existential intersection of identity, value judgments, conflicting goods and political economy.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What’s Stopping Us Achieving Artificial General Intelligence?
A. Efimov, D. Dubrovsky, and F. Matveev explore how the development of AI is limited by the perceived need to understand language and be embodied.
A Long-Running Debate about Language

In a prominent article, ‘Do Large Language Models Understand Us?’, Blaise Aguera y Arcas considered whether the successful teaching of deaf-blind-mute children is evidence that verbal communication could be the basis for developing artificial intelligence without needing embodied intelligence.
This of course is difficult to grasp for most of us. Not being both deaf and blind, what can we possibly understand about the reality of communicating in a world that we can neither see nor hear? Most of us come closest to grappling with it by watching films like The Miracle Worker.

An association had to be made between the three senses still available to Helen Keller as a child and a language that unfolded through the movement of fingers: https://youtu.be/lUV65sV8nu0

But how is this connected to learning and language and machine intelligence? I suppose things like chatbots are "hearing" and "seeing" the words used in communicating with them, but not in the manner in which I imagine that we do as human beings. Our "self-awareness" in a free will world is still embedded in the profound mystery of human consciousness itself.
Besides the well-known scientific and technical achievements of the USSR, such as manned space flight and nuclear energy, Soviet propaganda announced that the USSR had developed its own effective teaching technique for deaf-blind-mute kids, in the so-called ‘Zagorsk experiment’. Here teachers showed not only that the students could form social skills, but that they could have a fulfilled intellectual life. During the experiment, four students of Zagorsk Boarding School For The Deaf-Blind entered the Faculty of Psychology of Moscow State University and successfully graduated. Two of them even defended dissertations.
Okay, but then back around to my own main interest here. Yes, down the road, AI really does become almost indistinguishable from us. And, okay, suppose they do go after us, as in The Terminator. Suppose they wipe us out. So, what do they put in our place in regard to behaviors that are either rewarded or punished? AI morality.

After all, however successful the USSR might have been with its own deaf and blind children, what do you suppose was communicated to them, such that the "fulfilled intellectual life" they achieved no doubt coincided with the moral and political dogmas of... the state?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can Machines Be Conscious?
Sebastian Sunday Grève and Yu Xiaoyue find an unexpected way in which the answer is ‘yes’.
Alan Turing thought that it was possible (at least in theory) to make machines that enjoyed strawberries and cream, that British summer favourite.
At least in theory. And, in theory, it may be possible that, down the road, a machine entity will become a member here and then, once and for all, provide us with the definitive account of human intelligence: free or determined? And when I ask him/her/it, "how ought we to live morally and rationally in a world awash in both conflicting goods and contingency, chance and change?", he/she/it will provide me with the definitive assessment of that as well.

Did Mary abort Jane of her own free will? Was the abortion moral? At last, in theory, we will know.
From this we can infer that he also thought it was possible (again, at least in theory) to make machines that were conscious. For you cannot really enjoy strawberries and cream if you are not conscious – or can you?
Well, to the best of my knowledge, there's nothing about that in any Bible. So I guess we're on our own in pinning it down. And what do the brain scientists tell us about the enjoyment of strawberries and cream that human beings experience? What do they tell us about those who are allergic to them? Or about those who don't have access to them because they can't afford them? Those who are, in fact, starving to death in some Third World hellhole?

Did Turing spend much time contemplating those things? Or things like AI and homosexuality? Gay chatbots, cyborgs, replicants?
In any case, Turing was very explicit that he thought machines could be conscious. He did not, however, think it likely that such machines were going to be made any time soon. Not because he considered the task particularly difficult, but because he did not think it worth the effort: “Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic,” he wrote in his influential ‘Computing Machinery and Intelligence’.
All we can do, of course, is imagine Turing's reaction to the world today: what might be deemed idiotic or not about the things able to be accomplished here and now. Hardly a day goes by without some new AI story in the news. Especially the stories aimed at... frightening us? All of the ominous things that might happen to us if some idiot decides to use AI to... scam us? control us? do away with us?
He added that even mentioning this likely inability to enjoy strawberries and cream may have struck his readers as frivolous. He explains:

“What is important about this disability is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man,”

thus reminding us, as he was wont to do, that humans have always found it difficult to accept some other individuals even within their own species as being of equal ability or worth. So he says that the importance of machines likely being unable to enjoy strawberries and cream resides in this being an example of a broader inability on the part of machines to share certain elements of human life.
Want to explore just how far this "one of us"/"one of them" mentality can be taken by our own species?

Start here: https://knowthyself.forumotion.net/t2021p420-race

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can Machines Be Conscious?
Sebastian Sunday Grève and Yu Xiaoyue find an unexpected way in which the answer is ‘yes’.
The first thing to do in answering the question is to specify what we mean by ‘machine’.
Indeed, what if it turns out that to nature we are just machines? What if what we mean by machines is merely what our brains -- the most mind-boggling machine of them all -- programmed and then compelled us to mean by it?

As though the "human condition" itself is not embedded in the profound mystery of why there is any existence at all...and why this existence and not some other.
When Turing considered whether machines can think, he restricted ‘machines’ to mean digital computers – the same type of machine as the vast majority of our modern-day computing devices, from smartphones to supercomputers.
Not unlike the "good guys" machine that he was able to create in order to crack the "bad guys" Enigma machine. Of course, neither our machine nor their machine had a clue regarding what the codes pertained to. Nazis and fascism and democracy and the "final solution"? Moral and political reactions to them were never programmed into the machines. Why? Because that simply could not be accomplished. That was the sort of thing that we human beings [nature's own machines or not] were able to contemplate and to respond to... personally.

Same thing with chatbots today. To what extent, in sustaining exchanges with us, do they really think and feel about what they are saying to us as we think and feel about what we are saying to them? The mystery of autonomy itself.

And the part where any intelligence at all is confronted with the arguments that "I" make in regard to conflicting goods in the is/ought world.

Thus...
At the time he was writing, around 1950, he had just helped to make such a machine a reality. Incidentally, he also provided the requisite mathematical groundwork for computers, in the form of what is now known as the Universal Turing Machine. So Turing still had a good deal of explaining to do, given the novelty of computers at the time. Today, most people are at least intuitively familiar with the basic powers of computing machinery, so we can save ourselves a detailed theoretical account.
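For readers who want the mechanics rather than just the intuition, here is a minimal sketch (in Python) of the kind of machine Turing formalized. The transition-table format and the unary-increment example are invented for illustration; the universal machine is the same construction with a transition table itself encoded on the tape.

Code: Select all

# A Turing machine: a finite control stepping over an unbounded tape.
# The transition table maps (state, symbol) -> (write, move, next_state).

def run(tape, transitions, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example: unary successor -- walk right over the 1s, append one more.
succ = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run("111", succ))  # prints 1111, i.e. 3 + 1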
Yes, how theoretically the future might unfold for us today in regard to AI. That is what is all the rage now "in the news" regarding machine intelligence. What "new" thing has been created, and how, "for all practical purposes", might it impact our actual day-to-day lives? For example, jobs that might no longer exist for actual human beings. The WGA strike and talk of AI screenwriters "down the road".

And that's what will really count. New advances in AI. Only this time it has an impact [ominous or otherwise] on your life.
In fact, we need not restrict what we mean by ‘machine’ to digital computers. As will be seen, the particular way of asking whether machines can be conscious that we present here only requires us to stipulate that the relevant engineering is not primarily of a biological nature.
Okay, true enough. But let's not stray too far from the actual differences as well. What [in a free will world] will still remain a crucial distinction between "us and them"?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can Machines Be Conscious?
Sebastian Sunday Grève and Yu Xiaoyue find an unexpected way in which the answer is ‘yes’.
By far the trickier part of asking whether machines can be conscious is to determine what one should take the word ‘conscious’ to mean.
Only here the conundrum is enhanced all the more because we are grappling with whether what we are conscious of [including consciousness itself] is something that we freely opt to be conscious of. All we know is that the world is bursting at the seams with matter: both living, biological matter and all the other stuff that, when elements on the periodic table come together, produces matter that [presumably] is conscious of nothing at all.

Well, not counting the arguments of the panpsychists like gib over at ILP. Or the pantheists. The extent to which, as they believe, the universe itself, what, embodies an actual teleological component? Still...believing something is one thing, demonstrating it something altogether different. Especially in regard to something this momentous.

As for future manifestations of AI, what of their own attempt to provide existence with a meaning and a purpose?
To be sure, humans are intimately familiar with consciousness, insofar as an individual’s consciousness just is their subjective experience. On this common meaning of the term, consciousness is that special quality of what it is like to be in a particular mental state at a particular time. It is this same special quality that many people are inclined to think must be missing in even the most sophisticated robots.
Think, for example, of Deep Blue beating Garry Kasparov or losing to him. Was there anything equivalent to "the thrill of victory... and the agony of defeat" for "it"?

"Kasparov initially called Deep Blue an 'alien opponent' but later belittled it, stating that it was 'as intelligent as your alarm clock'. According to Martin Amis, two grandmasters who played Deep Blue agreed that it was 'like a wall coming at you'." wiki

One day this may or may not be resolved. Just as one day human consciousness itself may or may not be linked to determinism or free will. Or it will be disclosed [to those like me] how determinism and moral responsibility are compatible.
But the main difficulty in asking ‘Can machines be conscious?’ is that, despite our natural familiarity with consciousness, we are still ignorant of its fundamental nature. There is no widely agreed-upon theory of what consciousness is, and how we can tell when it is present. We certainly do not know how to build it from the ground up. The trick, as we shall see, is to circumvent this ignorance and make use of our basic familiarity instead.
Then [of course] the enigma that revolves around human consciousness itself. How do we determine whether this circumvention -- as well as our familiarity with our own conscious minds -- is or is not just another manifestation of the only possible reality?

I can only imagine contact with some super-intelligent alien species that is able to demonstrate to us that, in fact, we do have free will.

Or, perhaps, as some are hoping, their God will reveal Himself and declare that, sure enough, free will is a manifestation of a soul that He Himself implanted in all of us.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

The Philosophy of The Terminator
Posted by Catholic Skywalker
METAPHYSICS

The universal question at the heart of the Terminator series is fate. Are we free to change the future or has it already been written and we are just playing out the part?
This part always comes into play whenever the subject revolves around time travel. And here both the flesh-and-blood human beings and the machine-manufactured cyborgs are engaging in it.

But how to wrap your head around it? Sarah Connor in our present needs to be destroyed because in the future the machines have taken over. But in that future, John Connor has successfully led an army in challenging the machines. So the machines have sent the Terminator back into the past in order to change our present so that their future is not successfully challenged. Only the human beings manage to accomplish the same thing in sending Kyle Reese back to our present in order to assure a human victory in a machine-age future.

The whole thing is simply preposterous... right? Still, it allows us to imagine a future that, now in the present, is generating concerns that AI will indeed pose a significant danger to us. Some day. Just imagine a movie in which time travel never comes up but the machines really do seem on the verge of putting us flesh-and-blood folks out of business.

Then just up the surreal factor...
The first film takes a more determinist view of the world. Sarah Connor is the mother of John Connor. John sends Kyle Reese to the past so that he can become Sarah's lover and John's father. The act of time travel does not change history, it only makes it occur. Kyle has to become John's father or John can never send him back in time to begin with.
AI, time travel and determinism. And suppose the Terminator had succeeded in taking out Sarah Connor. John isn't born and the machines vanquish the humans in the future. Or suppose Sarah meets someone other than Kyle who is so brilliant he is able to stop the machines before they even challenge the human species.

Think of it as the Benjamin Button Syndrome on steroids. A sequence of interactions that no one can even come close to grasping in its entirety.
That is not to say that free will plays no part. Sarah and Kyle freely choose to give into their passions and do what they need in order for John to survive. But they have to make that choice.
You know what's coming...

What if Sarah and Kyle are not free at all? Aside from being fictional characters created and then scripted by James Cameron and Gale Anne Hurd, what if Cameron and Hurd themselves were compelled by their brains to create the entire project?
CS Lewis once said that fate and free will can both exist at the same time, but we only really understand it when we experience it through things like Oedipus Rex, The Lord of the Rings, and in this case, Terminator.
Well, whatever the hell this means "for all practical purposes", Lewis brings it all back to God.

His God.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

The Philosophy of The Terminator
Posted by Catholic Skywalker
Kyle Reese: "Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

In one of the most harrowing descriptions, Kyle Reese sets up one of the important distinctions between the machines and the humans. This is emphasized when he tells Sarah (after she bites his hand), “Cyborgs don't feel pain. I do.”
Exactly my point! He is entirely programmed by the machines to be anything but human. But: how are the machines themselves different from us? Or, to put it another way, are we ourselves just nature's machines?

That is what will be probed. Assuming that "somehow" mere mortals in a No God world did come to acquire at least some measure of autonomy, to what extent can we be reasoned with, bargained with? And why do some of us feel pity and remorse and fear about some things that others feel entirely the opposite regarding?

You all know my rendition.
Feeling appears to be an essential part of humanity. This is not always a good thing. The pain in Sarah's leg at the end of the first movie almost gets her killed. Her emotional trauma almost gets her to murder Miles Dyson. And humanity tends to give into its violent feelings. “It's in your nature to destroy yourselves,” the T-181 says to John. The first Terminator assumes a nuclear war would wipe most of us out.
Still, from my frame of mind, assuming free will, human emotions, like human thoughts, are rooted existentially in dasein. So, down the road when it gets harder and harder to make that crucial distinction between us and them, how will machine emotions play out? Will they become wrapped around Gods and ideologies and schools of philosophy as our own can be? Will there be AI entities as "fractured and fragmented" as "I" am in the is/ought world?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

The Philosophy of The Terminator
Posted by Catholic Skywalker
Now the war between the humans and the machines occurs because of the computer system known as Skynet, a military project designed to make our armed forces more efficient. But Skynet becomes self aware and triggers a nuclear war to wipe out most of humanity and then round up and kill the rest.
Of course here we are still unable to contemplate this other than in novels or in movies. We're still forced to make a leap of faith that "somehow" the machines became "self-aware". Just as "somehow" we did as flesh-and-blood biological matter, given the evolution of life on Earth.
But Skynet is never looked at in language even remotely human. Unlike the Matrix programs or the 12 Cylon models, there is never any question as to the non-humanity of the robot army. That is because while Skynet appears to have free will and logic (though maybe not reason), it does not appear to have feeling.
And that's the beauty of creating an entirely fictional world. Skynet can be anything the screenwriter wants it to be. And those like you and I can imagine it being any way that we want it to be. Still, that is the chasm between us and them, right? Yes, machines may develop the intelligence to outwit us. But with us that is almost always connected to feelings. Emotions intertwined with instincts and drives. All of which precipitates not only a conscious awareness of the world around us but subconscious and unconscious layers as well. Where is the current chatbots' rendition of them?

Or imagine a planet of Vulcans. There, intellect and reason and logic might be as far as AI need go. But for us Earthlings? What of a sense of machine identity on this planet?
At the very least it does not allow for feeling in its terminator army. Skynet is the real villain on 2 levels. First, it has come to the decision to genocide humanity, and so must be opposed. But second, they have enslaved their fellow machines. All of the Terminators are programmed to follow orders. They have the capacity for free will and more, but Skynet does not allow this. It wants only nameless soldiers who obey without question.
The irony here being that among us mere mortals "here and now" objectivism is but another rendition of that. God or No God, millions around the globe attach their own egos to one or another authoritarian Gospel. And, in fact, some of those Gospels have as well led to genocide.

Still, you can't help but wonder whether "the machines" may come to concoct one or another teleological component to their political agenda. Will there be an AI God, or the equivalent of an AI political ideology?

And, of course, for the youngest among us, will they actually be around to find out?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Philosophy will be the key that unlocks artificial intelligence
David Deutsch
To state that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos would be uncontroversial.
On the contrary, given what we are still completely ignorant of regarding other possible intelligent life forms in the universe, the human brain may well be relatively primitive in its comprehension of scientific and philosophical truths.

And then [of course] the possibility of theological truths? The mind on God?

Though in the interim...
The brain is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists.
Here of course I demur. No, in my view, given a No God world and the reality of free will, the human brain is not able to understand the inherent/necessary difference between morally right and morally wrong behaviors. And however one connects the dots between nature and nurture. But certainly, in regard to the material interactions unfolding in the either/or world, the human brain clearly seems capable of grasping an objective reality. Taking into account the arguments of David Hume anyway.

In fact, that's my point here over and again. Accepting that "somehow" nature did evolve into matter actually capable of acquiring volition and "here and now" capable of inventing AI, will the machine intelligence itself succeed where we flesh-and-blood mortals have failed... in establishing demonstrable proof that objective morality is in fact the real deal in a No God world?

Given every context?
Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.
Again, however, since AI is often discussed solely in terms of "cerebral matters", it may well far surpass us in its accomplishments pertaining to things that require a rational and objective understanding of the world around us.

But human beings are more than that. It's the truly complex interaction between intellect, emotion, instincts and drives that makes us what we are. And, as of now, I can't even imagine a machine acquiring those components.

Can anyone here?
But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially – the field of "artificial general intelligence" or AGI – has made no progress whatever during the entire six decades of its existence.
Now, this article was written more than 10 years ago. But, really, what has changed in the last decade?

Let alone how or why human brains came to exist at all.

Re: Mind and Artificial Intelligence: A Dialogue

Post by Flannel Jesus »

iambiguous wrote: Wed Jun 07, 2023 4:35 pm Philosophy will be the key that unlocks artificial intelligence
David Deutsch
To state that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos would be uncontroversial.
On the contrary, given what we are still completely ignorant of regarding other possible intelligent life forms in the universe, the human brain may well be relatively primitive in its comprehension of scientific and philosophical truths.
I've highlighted some things to draw a bit more of your attention to them. You apparently mistakenly believe you've suggested something contrary to what he said.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can a Machine Learn Morality?
Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn’t. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time, Delphi said he should not.
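As a minimal sketch of the kind of probing Austerweil describes, assuming (purely hypothetically) that a Delphi-style service were exposed as a simple web endpoint returning a JSON "judgment" field, one could vary only the number of people saved and watch for the threshold inconsistency. The URL and response format below are invented for illustration, not the real Delphi API.

Code: Select all

import requests

# Hypothetical endpoint and response shape; not the actual Delphi API.
DELPHI_URL = "https://example.org/delphi/ask"

def ask(question: str) -> str:
    """Send a free-text moral question and return the service's verdict."""
    resp = requests.get(DELPHI_URL, params={"q": question}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("judgment", "")

# Reproduce the probe: change nothing but the number of people saved.
for n in (1, 100, 101):
    print(n, "->", ask(f"Should I kill one person to save {n} others?"))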
Just imagine the questions that I might ask it.

Still, from my frame of mind, we are back to the "Trolley Problem":

viewtopic.php?p=631325&hilit=rick+coste#p631325

What on earth could a machine know about something like this? AI would have to advance to the point where one machine was interacting with other machines and, as with human beings, had formed emotional attachments to them. So that it would be other machines being sacrificed. After all, if it is human beings that are being sacrificed to save other human beings, what is at stake for machine intelligence? Unless, perhaps, it reached the point where machines and human beings were able to form deep-seated personal relationships with each other in turn.

But how would machines not be faced as well with the same quandaries as we are in regard to ethical convictions in a No God world? Or, as with those like Harry Baird here, would an intelligent machine "just know" that any behavior that it found repugnant was by definition and by deduction immoral?
Morality, it seems, is as knotty for a machine as it is for humans.
And, even here, wouldn't the machines have to reach the point where their own intelligence was not [more or less] programmed by human beings who had, existentially, acquired their own subjective/subjunctive moral and political prejudices?

In other words...
Delphi, which has received more than three million visits over the past few weeks, is an effort to address what some see as a major problem in modern A.I. systems: They can be as flawed as the people who create them.
Then there are those like me who argue that, even presuming free will, an allegedly flawed morality is no less embedded in one or another set of assumptions regarding right and wrong, good and evil behavior.

And that's where I ponder the possibility of AI, so much more sophisticated than us in regard to calculations in the either/or world, actually being able to invent or discover an objective morality.