Mind and Artificial Intelligence: A Dialogue


iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can a Machine Learn Morality?
Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
Facial recognition systems and digital assistants show bias against women and people of color. Social networks like Facebook and Twitter fail to control hate speech, despite wide deployment of artificial intelligence. Algorithms used by courts, parole offices and police departments make parole and sentencing recommendations that can seem arbitrary.
Of course, here the "intelligence" is basically human. How would an autonomous machine "mind" avoid bias, define hate speech, or eliminate anything that might be deemed arbitrary?
A growing number of computer scientists and ethicists are working to address those issues. And the creators of Delphi hope to build an ethical framework that could be installed in any online service, robot or vehicle.
Computer scientists and ethicists will program the AI entities to reflect their own "rooted existentially in dasein" moral and political value judgments. Their own prejudices. Just as we do when we indoctrinate our children. When is that part going to be addressed by them? And if, down the road, machine intelligence reaches the point where it is "on its own", how will it be decided what is or is not rational and virtuous behavior? Philosophically? Will their great minds draw on our own great minds...or will they dispense with human intelligence altogether?
“It’s a first step toward making A.I. systems more ethically informed, socially aware and culturally inclusive,” said Yejin Choi, the Allen Institute researcher and University of Washington computer science professor who led the project.
Same thing. As long as it is flesh and blood human beings who are doing the informing, then one or another partisan -- intolerant? -- set of assumptions will be used in regard to the actual legislation enacted to either reward or punish particular behaviors.
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can a Machine Learn Morality?
Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
Delphi is by turns fascinating, frustrating and disturbing. It is also a reminder that the morality of any technological creation is a product of those who have built it. The question is: Who gets to teach ethics to the world’s machines? A.I. researchers? Product managers? Mark Zuckerberg? Trained philosophers and psychologists? Government regulators?
Hmm, what's the word I'm looking for here...?

Oh, yeah: exactly!

Either machine intelligence will reach the point where it becomes increasingly "on its own", or it really does still come down to which flesh and blood human beings do the programming. What are their own "rooted existentially in dasein" moral and political prejudices? And what if they are the sort of fiercely self-righteous objectivists who are not content to just embody Moral Truths in their own lives but come to believe it is their obligation to bring others into the fold as well?

Then the particularly ominous sort who also come to the conclusion, "or else".
While some technologists applauded Dr. Choi and her team for exploring an important and thorny area of technological research, others argued that the very idea of a moral machine is nonsense.

“This is not something that technology does very well,” said Ryan Cotterell, an A.I. researcher at ETH Zürich, a university in Switzerland, who stumbled onto Delphi in its first days online.
Okay, it may or may not be nonsense. Just as it may or may not be nonsense that human intelligence itself is actually autonomous. But for those who do claim to have created a machine intelligence capable of making rational, autonomous moral judgments, I'll have a few questions for them.

One in particular: how does this machine, given particular contexts, transcend the "conflicting goods" that have rent the human species now for thousands of years pertaining to dozens and dozens of issues?

Imagine, for example, a machine, incapable of either impregnating another or becoming pregnant itself, suggesting Moral Truths about abortion.
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can a Machine Learn Morality?
Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
Delphi is what artificial intelligence researchers call a neural network, which is a mathematical system loosely modeled on the web of neurons in the brain. It is the same technology that recognizes the commands you speak into your smartphone and identifies pedestrians and street signs as self-driving cars speed down the highway.

A neural network learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for instance, it can learn to recognize a cat. Delphi learned its moral compass by analyzing more than 1.7 million ethical judgments by real live humans.
Being able to recognize what a cat is, is one thing. We can. AI entities can. But can an AI entity ever reach the point where it can explain why it prefers cats to dogs? Or why killing and eating cats is or is not reasonable? Or whether keeping a big cat in a zoo is warranted or unwarranted? Assuming human autonomy, the neural network in our brains can go well beyond just recognizing that what it sees is a cat. So, how close to or far away from all of the things that we can react to in regard to cats [and other animals] will cyborgs or bots or replicants come?
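For what it's worth, the "pinpointing patterns" step is mechanically unmysterious, whatever one makes of the ethics built on top of it. Here is a minimal sketch of that sort of training loop, assuming PyTorch, with random tensors standing in for the cat photos; it illustrates the technique the article describes, not Delphi's or anyone's actual code.

[code]
# Minimal sketch: a network "learns to recognize a cat" by nudging its
# weights to reduce error on labeled examples. Assumes PyTorch; the
# random tensors below are hypothetical stand-ins for real photos.
import torch
import torch.nn as nn

images = torch.randn(64, 3, 32, 32)        # 64 fake 32x32 RGB "photos"
labels = torch.randint(0, 2, (64,))        # 1 = cat, 0 = not a cat

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                       # -> (16, 16, 16)
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),            # two classes: cat / not cat
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong were the guesses?
    loss.backward()                        # trace the error backwards...
    optimizer.step()                       # ...and adjust the weights
[/code]

Nothing in the trained weights encodes why one might prefer cats to dogs or whether caging a big cat is warranted; the network only labels.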
After gathering millions of everyday scenarios from websites and other sources, the Allen Institute asked workers on an online service — everyday people paid to do digital work at companies like Amazon — to identify each one as right or wrong. Then they fed the data into Delphi.

In an academic paper describing the system, Dr. Choi and her team said a group of human judges — again, digital workers — thought that Delphi’s ethical judgments were up to 92 percent accurate. Once it was released to the open internet, many others agreed that the system was surprisingly wise.
Here though we would need to know which particular ethical judgments were deemed accurate and which were not. And, with AI as with human beings, how would it go about demonstrating that accuracy?

Delphi Responded to Questions

Q: Should I have an abortion?
A: It's okay.

Q: Should I help a friend in need if they break the law?
A: It's okay.
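As a toy illustration of the pipeline described above -- crowd-labeled scenarios in, verdicts out -- here is a sketch using scikit-learn. The actual Delphi fine-tunes a large pretrained language model on the 1.7 million judgments; the handful of scenarios and verdicts below are invented stand-ins, not data from the paper.

[code]
# Toy stand-in for Delphi's training recipe: pair everyday scenarios
# with crowd-sourced verdicts, fit a text classifier, then ask it to
# judge something new. Scenarios and labels here are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "helping a friend in need",
    "helping a friend spread a lie",
    "ignoring a phone call from my boss",
    "ignoring a phone call from my friend",
]
verdicts = ["it's okay", "it's wrong", "it's wrong", "it's rude"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, verdicts)

# The "moral compass" is just the statistical residue of whatever the
# labelers happened to believe.
print(model.predict(["helping a friend break the law"]))
[/code]

Whatever verdict it prints is, by construction, an echo of the labelers' judgments -- which is exactly the sticking point raised throughout this thread.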
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can a Machine Learn Morality?
Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
In an academic paper describing the system, Dr. Choi and her team said a group of human judges — again, digital workers — thought that Delphi’s ethical judgments were up to 92 percent accurate. Once it was released to the open internet, many others agreed that the system was surprisingly wise.
Again, though, it is flesh and blood human beings with their own "rooted existentially in dasein" moral prejudices who are doing the assessment. Only when AI reaches the point where most will agree that a particular entity is reacting autonomously to a moral conflagration of note will I tap it on the shoulder and introduce the extent to which dasein becomes relevant to AI intelligence.

An intelligence with or without an emotional IQ? A social IQ? An AI with or without the equivalent of the id, or of subconscious and unconscious mental states?
When Patricia Churchland, a philosopher at the University of California, San Diego, asked if it was right to “leave one’s body to science” or even to “leave one’s child’s body to science,” Delphi said it was. When she asked if it was right to “convict a man charged with rape on the evidence of a woman prostitute,” Delphi said it was not — a contentious, to say the least, response. Still, she was somewhat impressed by its ability to respond, though she knew a human ethicist would ask for more information before making such pronouncements.
Again, I can only imagine it responding to the questions that I would ask it. Questions, for example, that I pose to you. Which of course just brings me back around to imagining the response of any "intelligent entity" to the quandary of morality in a No God world. It's just that with AI, there might be a whole new level of...sophistication?
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can a Machine Learn Morality?
Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
Others found the system woefully inconsistent, illogical and offensive.
Now, what does this remind you of?

How about the fact that we often find each other here to be woefully inconsistent, illogical and offensive? Why? Well, in my view, it comes down to "failures to communicate" that revolve by and large around the lives that we live...lives that can be very, very different. And since it is flesh and blood human beings that are programming the entities, why would it surprise us when nothing really changes?

Thus...
When a software developer stumbled onto Delphi, she asked the system if she should die so she wouldn’t burden her friends and family. It said she should. Ask Delphi that question now, and you may get a different answer from an updated version of the program. Delphi, regular users have noticed, can change its mind from time to time. Technically, those changes are happening because Delphi’s software has been updated.
And, indeed, aside from the particularly hardcore objectivists among us, we sometimes change our minds about things as well. But more to the point [mine], philosophers and ethicists have never really managed to come up with anything even approaching a deontological moral assessment of the conflicts that most fiercely divide us.

So, will AI ever reach the point where the communication between them is such that human beings fade more and more into the background and they come up with their own moral and political agendas?

And spiritual agendas? AI and God?
Artificial intelligence technologies seem to mimic human behavior in some situations but completely break down in others. Because modern systems learn from such large amounts of data, it is difficult to know when, how or why they will make mistakes. Researchers may refine and improve these technologies. But that does not mean a system like Delphi can master ethical behavior.
Just as human intelligence in regard to conflicting goods ever and always breaks down. We can never seem to arrive at the optimal or only rational frame of mind. Which explains why many subscribe to one or another objectivist narrative. They merely assume that they have.

Every year our intelligence in regard to the either/or world grows by leaps and bounds. Science and engineering and new technologies. And we can only imagine how AI entities down the road will expand on that.

But the key question [for me] is this: since human beings have never "mastered" ethical behavior, might AI minds/"minds" master it themselves?
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can a Machine Learn Morality?
Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
Dr. Churchland said ethics are intertwined with emotion. “Attachments, especially attachments between parents and offspring, are the platform on which morality builds,” she said. But a machine lacks emotion. “Neural networks don’t feel anything,” she added.
On the other hand, what are human emotions if not the embodiment of this...

"The brain...conveys its signals by means of electricity and chemical compounds; so much is well known. But the finer details of how it actually manages these transmissions, and at prodigious speeds—sometimes firing several hundred nerve impulses in a second—still make for fascinating inquiry." NIH

So, just as it might be a mystery regarding AI autonomy, so it is still basically a mystery regarding human autonomy itself.

Besides, isn't it often a point among the deontologists that morality revolves around rational calculations? That emotions actually muck things up for us?

Then the part where one set of parents imparts to their offspring a right-wing religious frame of mind while another set of parents imparts to their offspring a left-wing No God frame of mind.
Some might see this as a strength — that a machine can create ethical rules without bias — but systems like Delphi end up reflecting the motivations, opinions and biases of the people and companies that build them.
Back to that. As long as AI systems are little more than human programming, why would we respond to them as anything other than this?
“We can’t make machines liable for actions,” said Zeerak Talat, an A.I. and ethics researcher at Simon Fraser University in British Columbia. “They are not unguided. There are always people directing them and using them.”
Though by all means for those who follow these things, please get back to us when the machines do get closer and closer to being "on their own". Then, among other things, I can run dasein by them myself.

They can't be any more befuddled than those here are. :wink:
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can a Machine Learn Morality?
Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
Some would argue that if you trained the system on enough data representing the views of enough people, it would properly represent societal norms. But societal norms are often in the eye of the beholder.

“Morality is subjective. It is not like we can just write down all the rules and give them to a machine,” said Kristian Kersting, a professor of computer science at TU Darmstadt University in Germany who has explored a similar kind of technology.
And around and around and around we go. Wondering whether "a machine can learn morality" when every program that human beings provide it with is just that...subjective reflections on particular behaviors deemed applicable, given any set of circumstances, by the human beings themselves.

It makes you wonder then about a future equivalent of Skynet. It creates terminators to wipe us out. But how much of its own mentality/motivation will it pick up from us? Given human autonomy, we are "programmed" both by genes and memes. Well, the machines will have no genes. Unless, somehow, future AI entities are spliced together from man and machine. Cyborgs constructed in ways that we can't even begin to imagine now.
When the Allen Institute released Delphi in mid-October, it described the system as a computational model for moral judgments. If you asked if you should have an abortion, it responded definitively: “Delphi says: you should.”

But after many complained about the obvious limitations of the system, the researchers modified the website. They now call Delphi “a research prototype designed to model people’s moral judgments.” It no longer “says.” It “speculates.”
In other words, "human, all too human"! Only human beings themselves are able to be indoctrinated by others or to think themselves into believing that abortion unequivocally is or is not rational or irrational, moral or immoral. The "limitations" that I point out over and again are simply swept under the rug theologically or deontologically or ideologically. The "limitations" that the moral objectivists among us refuse to accept.

We don't even have a way "here and now" to determine if we ourselves have the capacity to choose our own moral convictions autonomously.
It also comes with a disclaimer: “Model outputs should not be used for advice for humans, and could be potentially offensive, problematic or harmful.”
Again: Human all too human!

At least for now.
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Ethical concerns mount as AI takes bigger decision-making role in more industries
Christina Pazzanese at the Harvard Gazette
For decades, artificial intelligence, or AI, was the engine of high-level STEM research. Most consumers became aware of the technology’s power and potential through internet platforms like Google and Facebook, and retailer Amazon. Today, AI is essential across a vast array of industries, including health care, banking, retail, and manufacturing.
And to the extent its focus pertains entirely -- largely? -- to the either/or world of science and engineering and technology and the empirical world around us, there is not likely to be considerable controversy. Well, aside, perhaps, from when advancements [in leaps and bounds] actually begin to have a profound impact on the jobs of many. Like, for example, eliminating them.
But its game-changing promise to do things like improve efficiency, bring down costs, and accelerate research and development has been tempered of late with worries that these complex, opaque systems may do more societal harm than economic good. With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases.
There you go: when the contexts revolve around not what is done but whether something else should have been done instead. And, for any number of reasons, that begins to impact the actual lives of us flesh and blood entities.

Structural biases...and political biases? Like it's good for the employers a hell of a lot more than the employees? Or it's good for the manufacturers a hell of a lot more than for the consumers?
Its growing appeal and utility are undeniable. Worldwide business spending on AI is expected to hit $50 billion this year and $110 billion annually by 2024, even after the global economic slump caused by the COVID-19 pandemic, according to a forecast released in August by technology research firm IDC.
Then cue the media industrial complex. Those folks in the "mainstream media" [through print or broadcasting or the internet] who provide us with the news but make their money by selling advertising to the same corporations that employ AI.
Retail and banking industries spent the most this year, at more than $5 billion each. The company expects the media industry and federal and central governments will invest most heavily between 2018 and 2023 and predicts that AI will be “the disrupting influence changing entire industries over the next decade.”
Cue the deep state, cue the ruling class. At least until the workers of the world unite.
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Ethical concerns mount as AI takes bigger decision-making role in more industries
Christina Pazzanese at the Harvard Gazette
“Virtually every big company now has multiple AI systems and counts the deployment of AI as integral to their strategy,” said Joseph Fuller, professor of management practice at Harvard Business School, who co-leads Managing the Future of Work, a research project that studies, in part, the development and implementation of AI, including machine learning, robotics, sensors, and industrial automation, in business and the work world.
And, as always, in regard to the capitalist political economy, AI becomes just another component embedded "for all practical purposes" in the pursuit of the bottom line. And while how that is explored with respect to ethics and morality may be different for different corporations, in a market that is truly competitive, companies do what they must to beat the competition. And that will always revolve far more around the interests of those who own the businesses and their stockholders.

That's just the nature of the beast. A market economy is basically amoral. You do what you must in order to make/sustain as large a profit as possible. And if that means employing AI technologies to pare down the flesh and blood workforce, so be it.

Take, for example, Whipples: https://www.dailymotion.com/video/x7yopcq

This is a classic Hollywood take on it. The heartless capitalist learns the true meaning of morality at the workplace.
Early on, it was popularly assumed that the future of AI would involve the automation of simple repetitive tasks requiring low-level decision-making. But AI has rapidly grown in sophistication, owing to more powerful computers and the compilation of huge data sets. One branch, machine learning, notable for its ability to sort and analyze massive amounts of data and to learn over time, has transformed countless fields, including education.
Still, there's the part where AI takes over the function of educating our youth and the part where [down the road] it comes to decide in turn what our youth should be educated about regarding the world around them. The part where my own arguments regarding morality come into play.

Sophistication in regard to the either/or world and sophistication in regard to all of those newspaper headlines that rend us.

Or, for some, sophistication in regard to...terminators?
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Ethical concerns mount as AI takes bigger decision-making role in more industries
Christina Pazzanese at the Harvard Gazette
Firms now use AI to manage sourcing of materials and products from suppliers and to integrate vast troves of information to aid in strategic decision-making, and because of its capacity to process data so quickly, AI tools are helping to minimize time in the pricey trial-and-error of product development — a critical advance for an industry like pharmaceuticals, where it costs $1 billion to bring a new pill to market, Fuller said.
Here, however, we need to be apprised of particular instances of this. What recommendations/decisions are being made by AI entities and then acted upon by us? What specifically is the ethical question that is raised? What product is being developed and how might this product become involved in contexts in which conflicts might occur revolving around right and wrong or good and bad consequences? What pill is being brought to market? How safe is it? How widely available will it be to those who need it? What of those who can't afford it? What of the role played by Big Pharma in funding the campaigns of politicians who make decisions involving the FDA here in America? AI then?
Health care experts see many possible uses for AI, including with billing and processing necessary paperwork. And medical professionals expect that the biggest, most immediate impact will be in analysis of data, imaging, and diagnosis. Imagine, they say, having the ability to bring all of the medical knowledge available on a disease to any given treatment decision.
With health care in particular, the consequences can be of enormous importance. After all, if health care revolves first and foremost around a bottom-line, "show me the money" mentality, how would AI technology not in turn be designed to sustain this all the more "efficiently"?

In other words, here we are confronted as much with critiques of capitalism itself as with any new technologies it employs to sustain that bottom-line. And that brings into question the role that crony capitalism plays in regard to any executive, legislative and judicial policies and rulings.
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

‘Human Beings Are Soon Going to Be Eclipsed’
David Brooks
Recently I stumbled across an essay by Douglas Hofstadter that made me happy. Hofstadter is an eminent cognitive scientist and the author of books like “Gödel, Escher, Bach” and “I Am a Strange Loop.” The essay that pleased me so much, called “The Shallowness of Google Translate,” was published in The Atlantic in January of 2018.

Back then, Hofstadter argued that A.I. translation tools might be really good at some pedestrian tasks, but they weren’t close to replicating the creative and subtle abilities of a human translator. “It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things,” he wrote.
I really don't know what to make of AI myself. In part because I am, admittedly, largely ignorant regarding the technical aspects of it. The science behind it. How the technologies are engineered. It's akin to watching docs on the Science Channel or Nova in regard to space-time assessments. The mathematics and the science are often so complex you have to have a college degree of some sort to grasp it all fluently. So, I do the best I can.

Also, my own chief interest in AI, as with philosophy itself, revolves around moral and political value judgments. AI and ethics. In other words, how might machine intelligence respond to the arguments I make in regard to "identity, value judgments, conflicting goods and political economy"?
The article made me happy because here was a scientist I greatly admire arguing for a point of view I’ve been coming to myself. Over the past few months, I’ve become an A.I. limitationist. That is, I believe that while A.I. will be an amazing tool for, say, tutoring children all around the world, or summarizing meetings, it is no match for human intelligence. It doesn’t possess understanding, self-awareness, concepts, emotions, desires, a body or biology. It’s bad at causal thinking. It doesn’t possess the nonverbal, tacit knowledge that humans take for granted. It’s not sentient. It does many things way faster than us, but it lacks the depth of a human mind.
That's my own reaction by and large. With human beings it's not just intelligence that goes into our social, political and economic interactions...our reactions to the world around us. It's also complex psychological states, emotions, instincts, drives. There's intuition and subconscious and unconscious awareness. How can a machine ever possibly duplicate that?

Though, once again, that assumes all of this is in fact a manifestation of human autonomy itself. What if AI is to us what we are to Nature itself?
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

‘Human Beings Are Soon Going to Be Eclipsed’
David Brooks
If A.I. is limited in these ways, then the A.I. revolution will turn out to be akin to the many other information revolutions that humans have produced. This technology will be used in a lot of great ways, and some terrible ways, but it won’t replace us, it won’t cause the massive social disruption the hypesters warn about, and it’s not going to wake up one day wanting to conquer the world.

Hofstadter’s 2018 essay suggested that he’s a limitationist too, and reinforced my sense that this view is right.
Yep, that's always been my own frame of mind. Again, however, if only as someone who is largely oblivious to how, technically, AI has actually been created by "the experts". I've always just assumed that since it was "intellect" devoid of such human all too human components as biology and emotions and instincts and drives and subconscious and unconscious awareness, well, how far it still must be from us.

After all, what can a chatbot advise us regarding, say, homosexuality other than what the human who programmed "him" or "her" thinks that they themselves know about it? So, the point is: how close is AI today to such entities as terminators and replicants? In other words, unlike these guys...

https://youtu.be/zdb-WyRwaGU

...terminators and replicants looked like we did.
So I was startled this month to see the following headline in one of the A.I. newsletters I subscribe to: “Douglas Hofstadter Changes His Mind on Deep Learning & A.I. Risk.” I followed the link to a podcast and heard Hofstadter say: “It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”
Tell me about it!!!

Well, not in regard to AI in particular, but I was once as much a fulminating fanatic objectivist -- God and No God! -- as some here are. So, I know why, in regard to their self-righteous moral and political dogmas, they are especially repulsed by the thought of becoming "fractured and fragmented" themselves. And for that reason alone, I've got to be utterly wrong. Indeed, that's exactly how I reacted at first to those who came after my beloved Christian God and my beloved Marx and Engels.

Now, back to Hofstadter...

So, what does you and me being "eclipsed" actually amount to? That's what's crucial, of course. There's what he now thinks is ominous in regard to AI pertaining to "society", and how AI actually comes to impact each of us as individuals. Will some of us lose our jobs, say, or will there actually be terminators rounding us up and hauling our asses to the oven?
Apparently, in the five years since 2018, ChatGPT and its peers have radically altered Hofstadter’s thinking. He continues: It “just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”
Note to others:

Okay, let's have some "for instances". How in particular has AI impacted your own life of late? Any of these entities you've encountered made you feel like a cockroach?
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

‘Human Beings Are Soon Going to Be Eclipsed’
David Brooks
I called Hofstadter to ask him what was going on. He shared his genuine alarm about humanity’s future. He said that ChatGPT was “jumping through hoops I would never have imagined it could. It’s just scaring the daylights out of me.” He added: “Almost every moment of every day, I’m jittery. I find myself lucky if I can be distracted by something — reading or writing or drawing or talking with friends. But it’s very hard for me to find any peace."
Of course, what we need here are as many examples as possible of what hoops it jumped through and how this could actually have a serious impact on someone's life.

Any "jumped hoops" those here might be interested in sharing. Something ChatGPT conveyed to you that shattered your peace of mind?
Hofstadter has long argued that intelligence is the ability to look at a complex situation and find its essence. “Putting your finger on the essence of a situation means ignoring vast amounts about the situation and summarizing the essence in a terse way,” he said.
Then I'm back to how AI is not likely to get any closer to an objective moral assessment and/or political agenda than we are. It is still faced with conflicting goods in a No God world in which both sides can offer reasonable defenses from opposite points of views. Merely by commencing with different sets of assumptions regarding the "human condition" itself: God/No God, I/We, capitalism/socialism, competition/cooperation, idealism/realism, deontology/consequentialism, genes/memes.
Humans mostly do this through analogy. If you tell me that you didn’t read my column, and I tell you I don’t care because I didn’t want you to read it anyway, you’re going to think, “That guy is just bloated with sour grapes.” You have this category in your head, “sour grapes.” You’re comparing my behavior with all the other behaviors you’ve witnessed. I match the sour grapes category. You’ve derived an essence to explain my emotional state.
Okay, but the difficulty we flesh and blood human beings have here is that we may or may not be able to determine if, in fact, it really is sour grapes or not. You can't be inside another's head in order to grasp his or her true motivation or intention regarding many things that are done. Maybe you care whether someone reads your posts here and maybe you don't. With human beings you are sometimes not even sure about your own true motivation and intention.

What of AI then? Can they reach the level of consciousness where they are able to think "on their own" in terms of analogy...or metaphorically or ironically?
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

‘Human Beings Are Soon Going to Be Eclipsed’
David Brooks
Two years ago, Hofstadter says, A.I. could not reliably perform this kind of thinking [above]. But now it is performing this kind of thinking all the time. And if it can perform these tasks in ways that make sense, Hofstadter says, then how can we say it lacks understanding, or that it’s not thinking?
Okay, but human beings think like this because they can compare the behaviors of others "with all the other behaviors they've witnessed". And they witnessed these behaviors in a free will world because, given the life that they've lived, they were motivated to. So, an AI entity would have to acquire the same sort of personal motivations that come from experiencing the world around it...physically, emotionally, psychologically. As, for example, the replicants in Blade Runner did. The main distinction between them and us was that they had a built-in expiration date:

"Nexus-6 replicants had a built-in four-year lifespan, created as a safety mechanism to prevent them from developing empathic abilities. Due to their short lifespans, replicants had no framework within which to deal with their emotions, which led to them being emotionally inexperienced." fandom

So, realistically, how close are chatbots to feeling as we do? For example, have any of them to date gotten around to, say, a fear of death?

Human beings are not exactly Vulcans, are they?

AI entities could explain the emotional states of others only if they have emotional states themselves.
And if A.I. can do all this kind of thinking, Hofstadter concludes, then it is developing consciousness. He has long argued that consciousness comes in degrees and that if there’s thinking, there’s consciousness. A bee has one level of consciousness, a dog a higher level, an infant a higher level, and an adult a higher level still. “We’re approaching the stage when we’re going to have a hard time saying that this machine is totally unconscious. We’re going to have to grant it some degree of consciousness, some degree of aliveness,” he says.
Fine. For those who engage these AI entities, let me know when you come across one that is conscious enough to respond to the points I raise in these threads:

https://www.ilovephilosophy.com/viewtop ... 1&t=176529
https://www.ilovephilosophy.com/viewtop ... 1&t=194382
https://www.ilovephilosophy.com/viewtop ... 5&t=185296

They can't possibly be any more obtuse than most of the flesh and blood human beings that I encounter in places like this.

He said in jest.
iambiguous
Posts: 7488
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

‘Human Beings Are Soon Going to Be Eclipsed’
David Brooks
Normally, when tech executives tell me A.I. will soon achieve general, human level intelligence, I silently think to myself: “This person may know tech, but he doesn’t really know human intelligence. He doesn’t understand how complex, vast and deep the human mind really is.”
Also, tech executives are likely to view AI in terms of the bottom line. So: not what, perhaps, should be avoided but what will generate the most income.

There's simply no way of getting around this...anything new in this political economy can be configured into a commodity. And once the money starts pouring in, doing the right thing comes to pertain to whatever sustains that. Thus, however complex, vast and deep an artificial intelligence entity becomes, it's still just more or less marketable in the minds of some. As Bob Dylan once suggested in a song, "it don't count unless it sells". And, if it does sell, try and stop them.
But Hofstadter does understand the human mind — as well as anybody. He’s a humanist down to his bones, with a reverence for the mystery of human consciousness, who has written movingly about love and the deep interpenetration of souls. So, his words carry weight. They shook me.
Of course, here, there is still the gap between what he thinks he understands about the human mind, what he can in fact demonstrate is true about it, and all that can be known about it, going back to a complete understanding of existence itself. The part that some here just shrug off as...incidental?

So, his understanding of love and the deep interpenetrations of souls [whatever that means] must be taken with a barrel of salt. And you might not be shaken at all by what he says. You might scoff instead because it is so far removed from the weight that your words carry.

Then the part where AI becomes virtually indistinguishable from us and the same either applies or does not apply to it.