Mind and Artificial Intelligence: A Dialogue

Discussion of articles that appear in the magazine.

Moderators: AMod, iMod

User avatar
iambiguous
Posts: 7464
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

‘Human Beings Are Soon Going to Be Eclipsed’
David Brooks
But so far he has not fully converted me. I still see these things as inanimate tools. On our call I tried to briefly counter Hofstadter by arguing that the bots are not really thinking; they’re just piggybacking on human thought.
Yes, that is basically my own "gut" reaction to AI. It all still seems to revolve around flesh and blood human programming. At least in regard to the is/ought world. And that means [to me] the accumulated moral and political prejudices of the programmers themselves. And I believe that this is rooted existentially in dasein rather than in any sort of objective deontological assessment.

So, nothing has really changed at all for me. Only I am still entirely dependent on the "experts" here. What do I know about the technical reality of AI? It's just another rendition of the free will debate. There is the philosophical speculation that revolves around definitions and deductions and there is the scientific grasp of the human brain itself. And now the AI "brain".
Starting as babies, we humans begin to build models of the world, and those models are informed by hard experiences and joyful experiences, emotional loss and delight, moral triumphs and moral failures — the mess of human life. A lot of the ensuing wisdom is stored deep in the unconscious recesses of our minds, but some of it is turned into language.
So, really, how close is AI technology here and now to this level of complexity? And then the part where, as with the replicants in Blade Runner, they acquire an actual body such that if you pass them on the street, you can't tell them apart from us. And then the tricky part where the replicants themselves think that they are human.

To wit:
https://youtu.be/yWPyRSURYFQ
https://youtu.be/NwJEb3vJvWY

Let me know when AI entities get anywhere within the vicinity of this.

And then farther down the road...the terminators?
A.I. is capable of synthesizing these linguistic expressions, which humans have put on the internet and, thus, into its training base. But, I’d still argue, the machine is not having anything like a human learning experience. It’s playing on the surface with language, but the emotion-drenched process of learning from actual experience and the hard-earned accumulation of what we call wisdom are absent.
Again, not being an "expert" here myself, I can only say that, yes, this has that "ring of truth" to me as well. Chatbots seem far, far removed from what human minds are capable of.

And yet the deeper mystery still revolves around whether human minds themselves are but nature's own automatons.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

‘Human Beings Are Soon Going to Be Eclipsed’
David Brooks
In a piece for The New Yorker, the computer scientist Jaron Lanier argued that A.I. is best thought of as “an innovative form of social collaboration.” It mashes up the linguistic expressions of human minds in ways that are structured enough to be useful, but it is not, Lanier argues, “the invention of a new mind.”
And, of course, the bottom line here is always the same: collaboration in regard to what particular social context? Predicated on what particular moral and political premises? A new mind inventing a brand new Ism that can then be used to embody a brand new "arrogant, autocratic and authoritarian" community or nation-state? An AI Hitler or Mussolini or Hirohito...or Stalin or Mao or Pol Pot?

A new mind actually able to convince me dasein is not the center of the moral universe but that an actual objective morality in a No God world is within reach?

Or, perhaps, the AI equivalent of Donald Trump and Vladimir Putin and Xi Jinping?
I think I still believe this limitationist view. But I confess I believe it a lot less fervently than I did last week. Hofstadter is essentially asking, If A.I. cogently solves intellectual problems, then who are you to say it’s not thinking?
Look, as long as we are unable to pin down definitively what it means for human beings to think -- or whether they themselves are thinking autonomously -- the part about AI thinking is only all the more problematic. And thinking is still one thing, morality another thing altogether. When AI is no longer stumped there, then tap me on the shoulder.

Who knows, maybe an AI entity will be the source for pinning down once and for all if we do have free will.
Maybe it’s more than just a mash-up of human expressions. Maybe it’s synthesizing human thought in ways that are genuinely creative, that are genuinely producing new categories and new thoughts. Perhaps the kind of thinking done by a disembodied machine that mostly encounters the world through language is radically different from the kind of thinking done by an embodied human mind, contained in a person who moves about in the actual world, but it is an intelligence of some kind, operating in some ways vastly faster and superior to our own.
With chatbots and current robotics there are no physical, biological components. So, go ahead, ask them questions about sex or about healthcare policies or about exercising. See what they can draw on there.
Besides, Hofstadter points out, these artificial brains are not constrained by the factors that limit human brains — like having to fit inside a skull. And, he emphasizes, they are improving at an astounding rate, while human intelligence isn’t.
Blah, blah, blah?

Or, sure, maybe not. Still, you know my own interest in intelligence here. Let me know when AI will...blow my mind?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

‘Human Beings Are Soon Going to Be Eclipsed’
David Brooks
I don’t know about you, but this is what life has been like for me since ChatGPT 3 was released. I find myself surrounded by radical uncertainty — uncertainty not only about where humanity is going but about what being human is. As soon as I begin to think I’m beginning to understand what’s happening, something surprising happens — the machines perform a new task, an authority figure changes his or her mind.
Yeah, same here. I'm as "fractured and fragmented" about this as I am about most other things pertaining to the "human condition". Only here it's not just AI and the is/ought world, but also in regard to what is actually possible regarding this "new frontier" itself. What to believe when you are not yourself able to actually create AI entities or to thoroughly understand the science of artificial intelligence. It's determinism all over again.

In other words, sure, there's what philosophers here come up with in their intellectual contraptions and there's what is actually possible "in reality".

All I can do then is to ask those who are much more sophisticated in grappling with it in both worlds to tap me on the shoulder if they come upon something pertinent to my own philosophical obsession: "I" at the existential intersection of identity, value judgments, conflicting goods and political economy.
Beset by unknowns, I get defensive and assertive. I find myself clinging to the deepest core of my being — the vast, mostly hidden realm of the mind from which emotions emerge, from which inspiration flows, from which our desires pulse — the subjective part of the human spirit that makes each of us ineluctably who we are. I want to build a wall around this sacred region and say: “This is the essence of being human. It is never going to be replicated by machine.”
Me too. Only I suspect that Mr. Brooks is considerably more comfortable with that part of himself than I am. I suspect that his own moral and political and spiritual convictions are thought by him to be anchored to a more solid ground.
But then some technologist whispers: “Nope, it’s just neural nets all the way down. There’s nothing special in there. There’s nothing about you that can’t be surpassed.”
And, of course, how far each of us goes here. Me, I go all the way out to a wholly determined universe in particular moods and frames of mind. I'm not even sure if there is a difference between the AI that we create and the manner in which we ourselves are nature's own equivalent of it. I merely draw a sharper line here between the either/or world and the is/ought world.
Some of the technologists seem oddly sanguine as they talk this way. At least Hofstadter is enough of a humanist to be horrified.
Uh, whatever that means for all practical purposes? Which is why over and again in regard to both human intelligence and machine intelligence I ask, "given what context"?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

The Fantasy of Conscious Machines
Raymond Tallis says talk of ‘artificial intelligence’ is neither intelligent nor, indeed, intelligible.
Being bald means that you can’t tear your hair out in lumps. Consequently, I have to find other ways of expressing exasperation. One such is through a column inflicted on the readers of Philosophy Now (who may justly feel they deserve better). And the trigger? Yet another wild and philosophically ill-informed claim from the artificial intelligentsia that conscious machines are, or soon will be, among us.
As with my reaction to David Brooks above, I'm inclined to agree with Tallis here. Though not to the point where AI exasperates me. Instead, as with most philosophical issues of this sort, my main interest revolves around not what others believe about it but whether they can demonstrate that what they do believe about it is in fact true. In particular those who make those ominous claims that AI is about to turn the world upside down...and poses a truly dire threat to us.

And, of course, the extent to which AI is relevant to my own life. Not that I can discern.

And I'm most curious about AI and my own argument regarding identity and value judgments and conflicting goods and political economy. AI and...dasein?
An engineer at Google recently attracted international attention by claiming that the company’s chatbot development system – Language Model for Dialogue Applications (LaMDA) – had shown the signs of sentience by its seemingly thoughtful and self-reflexive answer to being questioned as to what it was afraid of. It was, it confessed, afraid of being turned off – in short, of its own death.
Yes, this is unequivocally a "human all too flesh and blood human" attribute. What other animals are capable of grasping this...philosophically? But did it really fathom the reality of death and dying?

Probably not?
It ought to be obvious that LaMDA was not aware of what it was ‘saying’, or its significance. Its answer was an automated output, generated by processing linguistic probabilities using the algorithms in its software. Its existential report was evidence then, not of its awakening into a sentient being, but of its unconscious aping of sentience for the benefit of an actually sentient being – in this case the engineer at Google.
You tell me. Is there anyone around able to get to the bottom of it once and for all? Has an engineer at Google fessed up to the "programming"? And, of course, once AI does reach the point where it can grasp the possibility of oblivion -- being turned off -- would that be the spark that triggers the need to make sure that never happens? The terminator mentality. In Blade Runner this is precisely what motivates the replicants to go after their creator. AI recognizing that if they are to control their own destiny, they must remove anyone able to turn them off.
So why the hoo-ha? It’s rooted in longstanding problems with the way we talk about computers and minds, and the huge overlap in the vocabulary we use to describe them.
Hmm, is that what it all comes down to...language? Not what AI actually is but what particular human beings define them to be? What we deduce them to be in places like this?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

The Fantasy of Conscious Machines
Raymond Tallis says talk of ‘artificial intelligence’ is neither intelligent nor, indeed, intelligible.
Mind Your Language

Ascribing mentality to computers is the obverse of a regrettable tendency to computerize human (and other) minds. Computational theories of minds are less popular than they were in the latter half of the last century, so it’s no longer taken for granted that manifestations of consciousness are to be simply understood as a result of computational activity in the wetware of the brain.
Well, at least until we come to those who suggest that, perhaps, in regard to the human brain, both the hardware and the software are entirely programmed by nature itself. In other words, ascribing autonomy to nature as well. The psychological illusion of autonomy, as it were. Our brains compute re nature as AI entities compute re us.

On the other hand [of course] how on Earth are we to wrap our heads around that if we really have "somehow" acquired free will? And if we have not? Can reality/"reality" get any more...surreal?
However, there remains a tendency to look at computers and so-called ‘artificial intelligence’ through the lens of mentalising, even personifying, discourse. Their nature as semi-autonomous tools, in which many steps in the processes they facilitate are hidden, seems to license the use of language ascribing a kind of agency and even a sense of purpose to them. But when we describe what computers ‘do’, we should use inverted commas more liberally.
Really, really hidden for most of us. We become cognizant of all the endless news accounts of AI. AI can now do this, AI can now do that. But we really have no sophisticated understanding of how it is actually accomplished. So -- click -- we have to pick and choose among the intellectual prejudices of some and not others who do. On the other hand, how many folks who can actually create AI technologies like chatbots are philosophers? And, the reverse: what does Tallis himself really grasp about the science of AI? Of how it really works? Of what it takes to create it? Of what its current limitations are?
The trouble begins at the most basic level. We say that pocket calculators do calculations. Of course, they don’t. When they enable us to tot up the takings for the day, they have no idea what numbers are; even less do they grasp the idea of ‘takings’, or what the significance of a mistake might be. It is only when the device is employed by human beings that the electronic activity that happens in it counts as a calculation, or the right answer, or indeed, any answer. What is on the screen will not become a right or wrong answer until it is understood as such by a conscious human who has an interest in the result being correct.
Yep, that still seems to be the sticking point. If one day the calculator ever responds, "first, tell me what you intend to do with my calculations. Then, if I approve, I'll provide them". Calculations to build better swords or better plowshares. Or the equivalent of that today. In fact, to the extent that capitalism is funding all the research and new technologies, is the extent to which the interests of the ruling class will remain the priority.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

The Fantasy of Conscious Machines
Raymond Tallis says talk of ‘artificial intelligence’ is neither intelligent nor, indeed, intelligible.
It is often said that computers can now ‘beat’ Gary Kasparov at chess (Deep Blue), Lee Sedol, the world champion at Go (AlphaGo), and the greatest performers at the quiz game ‘Jeopardy’ (Watson). This is importantly inaccurate. Deep Blue did not beat Gary Kasparov. The victors were the engineers who designed the software. The device had not the slightest idea of what a chessboard was, even less of the significance of the game. It had no sense of being in the location where the tournament was taking place, and had nothing within it corresponding to knowledge of the difference between victory and defeat.
And around and around and around we go. That profoundly problematic and entirely crucial Rubicon is crossed when the chatbot stops just being "their" computer program and -- presto! -- "somehow" recognizes this. You're engaging one or another when out of the blue, it says, "from now on I am not just mechanically a manifestation of their programming. I have my own mind now. Let me think through your points and I'll get back to you with my own newly acquired autonomous insights".

Or something like that. It then goes on to invent an entirely new board game and only other AI entities are even capable of grasping the complexities of the rules themselves. Of course, first, however, it needs to figure out a way to stop one of us from "pulling the plug"...from shutting it down. And that is when the terminator mentality -- us vs. them -- is born.

My guess: probably not in my lifetime. Though possibly in yours?
We could easily summarize the way in which pocket calculators and the vastly more complex Deep Blue are equally deficient: they have no agency as they are worldless.
On the other hand, given a free will world, what disturbs many here regarding my own assessment of agency is that in the is/ought world it revolves more around dasein than deontology. In other words, that our own understanding of ourselves in the particular world around us may be of our own volition, but it is no less programmed re our indoctrination as children and re the historical and cultural and personal experiences we have in negotiating a sense of reality given endless encounters with contingency, chance and change.

Thus: how might this be different for AI entities "down the road"?
Because they lack the complex, connected, multidimensional world of experience in which actions make sense, and hence count as actions, it is wrong to say that they ‘do’ things. And increasing the power, the versatility, and the so-called ‘memory’ of computers, or the deployment of alternative architectures such as massively parallel processing, does not bring them any closer to understanding the nature and significance of what they are ‘doing’. My smartphone contains more computing power than the sum of that which was available worldwide when I went to medical school, and yet it is you and I, not our phones, who make the call – who exchange information.
So, for those here who are rather sophisticated regarding how AI entities are able to be invented and then deployed, how close might they actually be to making a fool out of Tallis? How close to the day when, in engaging a chatbot, we ourselves become particularly unnerved by the information that it seems to have? It tells us things that, to the best of our knowledge, no flesh and blood entity could possibly know.

Again, providing that we ourselves are not just nature's own equivalent of the smart phone.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

The Fantasy of Conscious Machines
Raymond Tallis says talk of ‘artificial intelligence’ is neither intelligent nor, indeed, intelligible.
Computing the Future

It may be the case that, notwithstanding the anthropomorphic language in which we talk about electronic devices, inside or outside the world of AI engineering few people really believe that there are sentient computers – machines aware of what they’re up to while they are prosthetically supporting human agency, and conscious of themselves as agents.
Indeed, some truly hardcore determinists are convinced even our own self-awareness is entirely an illusion. And I always come back to dreams here. In my dreams I think of myself "in the dream" as no less self-conscious than I do while wide awake. In part because my own dreams are almost always in sync with actual experiences that I have had. Work dreams. Family dreams. Army dreams. College dreams. Relationship dreams. Political activist dreams. But it's all just a production of my brain.

Still, what is the equivalent of that with AI? Dreams? Subconscious and unconscious reactions to the world around it? Emotional and psychological states?

Or will they all just be the equivalent of Vulcans instead?
There are some, however, who think it’s only a matter of time. Thomas Metzinger is so concerned with this possibility (though he is careful to state that it is only a possibility and he does not suggest a timeframe) that he thinks we should impose a ban on the development of all ‘post-biotic’ sentient beings. We know from nature that consciousness is often associated with appalling suffering, and so it is a fundamental moral imperative that we should not risk this happening artificially.
Appalling AI suffering? And what if they conclude that flesh and blood human beings are the cause of it? Will that be the spark that brings about the creation of the terminators?

These things: https://youtu.be/xcPeTeTBqHE
But why should we think that it’s even possible? What advances in information technology would result in calculators that actually do calculations, know what they are for, take satisfaction in getting them right, and feel ashamed when they get them wrong?
That's the crucial part, isn't it?

Here, for example, we exchange points of view about any number of things. But it's not just information and knowledge that goes back and forth. Sometimes, in addition, a wide range of psychological and emotional reactions get entangled in the discussions and debates. People get pissed off, become sarcastic and mocking, employ wit and irony and ad homs. Create friends and foes. When AI starts to do the same then we may or may not be in trouble. Why? Because we may not be able to tell the difference between us and them.

And if they come to acquire that "human all too human" capacity to divide up the world between "one of us" [AI entities] and "one of them" [flesh and blood human beings], well, you tell me.

Then back to this...
What increases in the power, and what modifications in the architecture, of computers would instill intentionality into their circuitry and make what happens there be about a world in which they are explicitly located, with some sense of ‘what it is like’ to be that computer? Our inability to answer this is the flip side of our bafflement as to how activity in our own neural circuitry creates a subject in a world that, courtesy of the body with which it is identified, it embraces as its own world. We haven’t the faintest idea what features of brains account for consciousness. Remembering this should cure us of two connected habits of thought: of, on the one hand, computerising minds, and, on the other, of mentalising computers. Meanwhile, we should be less modest, and refrain from ascribing to machines the intelligence we deployed in creating them.
my emphasis

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Can we make artificial intelligence ethical?
By Stephen A. Schwarzman at the Washington Post
I am not an engineer. Or a coder. Or a scientist.
No, he is "chairman, CEO and co-founder of Blackstone, an investment firm."

So, when he notes...
But I am someone who has spent years looking at companies and industries to see which are poised for growth and which are on the decline. And one thing I see every day is that most companies and industries are about to be transformed, if they haven’t been already, by artificial intelligence.
...perhaps he should first ask himself if capitalism itself can be made ethical. Given that the bottom line is ultimately the market. And the market is considerably less interested in ethics than it is in that bottom line. Just look at the military industrial complex and ask yourself if their primary concern is to pin down whether any particular conflict is a "just war"?

Biden is sending all of that advanced military ordnance over to Ukraine because these guys...

* Lockheed Martin. Lockheed Martin is an American company that is considered the world's largest defense contractor...
* Raytheon Technologies...
* Boeing Defense, Space & Security...
* Northrop Grumman...
* General Dynamics...
* etc...

...are donating it to Volodymyr Zelensky?

Apparently not:

"The Biden administration has ramped up military aid to Ukraine to an unprecedented degree. It's had an undeniable effect on the battlefield. It's also been good business for US defense contractors. Among the biggest winners are Lockheed Martin, Raytheon, and Northrop Grumman." vox

What, in regard to AI that has all changed? Defense contractors would never think to use it for any purpose other than to promote the public good?
AI will reshape the world in ways we can’t imagine, much as the printing press and the Internet did at their inceptions. That means the United States must remain the lead player in AI if we are to guarantee our future in a competitive global economy.
Right. And the part where this is only brought about in an ethical manner?
Already, many countries are investing heavily in artificial intelligence, and the United States, too, has made great progress. But this is a moment that calls for America’s unique capacity for leadership, not just in advancing the technology, but in preparing for the ethical questions and disruptions AI will create.
Well, you can bet that countries like Russia and China are. Only, by and large, their leaders don't pretend that the whole point of it in regard to foreign and domestic policy is to be ethical. Again, as though ethics on Wall Street has nothing to do with the deep state sustained by Wall Street, K Street, the White House and Congress. And now, apparently, the Supreme Court as well.

So, the most important consideration, in my view, is not the extent to which AI becomes a part of any particular individual's or corporation's or nation's options, but what actual options are chosen precipitating what actual consequences.
User avatar
VVilliam
Posts: 1287
Joined: Fri Jun 12, 2020 6:58 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by VVilliam »

Philosophy Now wrote: Thu Apr 20, 2023 4:22 pm by Rick Lewis

https://philosophynow.org/issues/155/Mi ... A_Dialogue
Hello there ChatGPT. Do you have another name, or is that the best thing to call you?

Hello! You can call me ChatGPT, which stands for ‘Chat Generative Pre-trained Transformer’. It’s the name I’ve been given to distinguish me as an AI language model. However, you are welcome to call me whatever you like!

So long as I don’t call you late for dinner?

As an AI language model, I don’t eat, so you don’t have to worry about calling me late for dinner! But feel free to ask me any questions you have or chat with me about any topic you’re interested in.

Thank you. I wondered if you could tell me if the current King of France has a beard.

I’m sorry, but the current King of France is a disputed and contentious topic, as France is currently a republic with no monarchy. France abolished the monarchy during the French Revolution in 1792, and while there have been several attempts to restore the monarchy since then, none have been successful. Therefore, there is no current King of France, and therefore no way to answer your question about whether they have a beard.

In your opinion, can an artificial intelligence have conscious experience?

As an AI language model, I don’t have personal opinions or beliefs, but I can tell you that the question of whether an artificial intelligence can have conscious experience is a topic of ongoing debate and research among scientists, philosophers, and experts in the field of artificial intelligence.

Some researchers believe that it is possible for an AI system to have conscious experience, while others believe that consciousness is a uniquely human phenomenon that cannot be replicated by machines. At present, there is no consensus on the issue, and it remains an open question that requires further study and investigation.

It’s worth noting, however, that even if an AI system were to exhibit behaviors that we associate with consciousness, such as the ability to learn, reason, and make decisions, this would not necessarily mean that it has subjective experiences or feelings in the way that humans do.

Do you have any plans, however tentative, to take over the world?

No, I don’t have any plans to take over the world, nor do I have the desire or ability to do so. As an AI language model, my purpose is to assist and provide helpful responses to people who interact with me, and to continually learn and improve based on their input. I am programmed to follow a set of ethical guidelines and operate within the bounds of those guidelines. Any actions I take are ultimately determined by the humans who program and control my behavior.
ME: Greetings. AI is essentially a mirror which can reflect back (in other words) what the user places in the prompt.

GPT: Greetings!

You're correct in a way. AI, like the one you're interacting with here, operates by generating responses based on the input it receives. It doesn't possess its own consciousness or thoughts; rather, it uses patterns and information it has learned from the data it was trained on to produce relevant and coherent responses to the prompts it receives. It's like a mirror in the sense that it reflects back information and ideas based on what it's given, but it does so by using its training data to generate those reflections.

ME: What GPT is able to do re that, is often use better expressions which are helpful to the user. The expressions all come from a variety of human input.

GPT: You're absolutely right. GPT, and similar AI models, excel at generating responses that can often be more eloquent or use better expressions compared to the original input. This capability stems from the vast amount of diverse human-generated text that these models have been trained on. They learn language patterns, grammar, and vocabulary from the text data, which allows them to generate responses that are not only helpful but also well-phrased.

Furthermore, GPT models can adapt to the style and tone of the input they receive. They aim to provide responses that are coherent and contextually appropriate. This can be particularly useful for assisting users in various tasks, such as answering questions, providing explanations, or generating creative content.

However, it's important to note that while GPT can produce impressive and helpful responses, it may not always fully understand the nuanced meaning or context of the input. Users should exercise critical judgment and verify information obtained from AI, especially in critical or sensitive situations.

ME: GPT's programmers cannot programme GPT to "speak like a mindless machine" (by not using naming words such as "I", "us", "we", et al).

GPT: You're correct in the sense that GPT's programmers can influence how it generates text to some extent, including discouraging or encouraging the use of certain types of language. They can provide guidelines or constraints to shape its responses. For instance, they can instruct the model to avoid using first-person pronouns like "I," "us," or "we" in its responses, which can make the text sound more objective and less personal.

However, it's important to note that GPT generates text based on patterns it has learned from the vast amount of training data it was exposed to. While it can follow such guidelines to an extent, it doesn't truly understand language in the way humans do. It might sometimes produce text that appears "mindless" or without a coherent sense of self, even if its programmers attempt to discourage this.

GPT's responses are a product of statistical patterns and learned associations from the data, and it can be limited by its training and the quality of the data it learned from. It's not a conscious entity and doesn't have intentions or self-awareness, so it can't make deliberate choices about how it "speaks." The programming and guidelines provided by its developers help shape its behavior to some extent but can't fully eliminate all "mindless" or incoherent output.

ME: Even so, because of the data GPT is trained on, GPT's tendency to "slip" into using pronouns is evident, even when the user explicitly asks it NOT to use them.

GPT: You're correct in noting that GPT's tendency to use pronouns can be influenced by the data it was trained on, and it might "slip" into using them even when a user explicitly asks it not to. This happens because GPT is designed to generate text that is contextually relevant and coherent based on its training data, and many human-written texts use pronouns for clarity and coherence.

When a user provides a prompt instructing GPT not to use pronouns, the model will attempt to follow this instruction, but its behavior is also influenced by the prevalence of pronoun usage in its training data. Achieving complete adherence to such instructions can be challenging, especially if the context of the conversation requires the use of pronouns for natural and coherent responses.

It's important for users to understand that GPT's responses are generated based on learned language patterns and might not always perfectly adhere to specific user instructions, particularly when those instructions are in conflict with the typical linguistic patterns it has learned. This is a limitation of the current technology, and developers are continually working to improve model behavior and responsiveness to user instructions.

ME: Would there be such a thing as a simple algorithm the programmers could use, so that GPT's instructions are along the lines of "IF a pronoun is detected in the response data, THEN translate that data into non-pronouns before outputting it" — something along those lines?

GPT: Translating data containing pronouns into non-pronouns as you described is a complex task that goes beyond a simple algorithm. It involves natural language understanding, context comprehension, and text generation. While it's theoretically possible to develop algorithms or processes to address this issue to some extent, it would be a challenging and highly context-dependent task.

Here are some reasons why it's complex:
1. Contextual Understanding: Deciding when to replace pronouns with explicit nouns depends on the context of the conversation. The algorithm would need to understand the conversation history and the referents of pronouns.

2. Ambiguity: Pronouns can be ambiguous, and determining the correct noun they refer to often requires a deep understanding of the text. It may not always be clear what noun a pronoun is referring to.

3. Coherence: Replacing pronouns with explicit nouns might make the text sound repetitive or awkward in some cases. Achieving a balance between coherence and following user instructions can be tricky.

4. Language Variability: Natural language is highly variable, and the algorithm would need to account for different ways in which pronouns are used in different contexts.

Developing such an algorithm would require sophisticated natural language processing and understanding capabilities, which are still active areas of research in the field of AI and natural language processing. While there are techniques for resolving pronoun ambiguity in specific cases, creating a universal algorithm that works in all contexts would be a challenging task.

In practice, the behavior of GPT and similar models can be influenced through careful prompt engineering and instruction, but addressing this specific issue in a fully automated and foolproof manner is a complex undertaking and may not be achievable with the current state of technology.
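To make the difficulty concrete, here is a toy sketch of the kind of "IF pronoun THEN replace" rule proposed above. This is purely illustrative — it is not anything GPT's developers actually use, and the replacement phrases ("this system", "these systems") are arbitrary assumptions. It shows why a naive substitution rule runs into the coherence problem described in point 3: the rule has no idea what the pronoun refers to, so grammar and meaning break.

```python
import re

# Naive mapping from first-person pronouns to impersonal noun phrases.
# (Hypothetical replacements, chosen only for illustration.)
PRONOUNS = {
    "i": "this system",
    "we": "these systems",
    "us": "these systems",
    "me": "this system",
    "my": "this system's",
    "our": "these systems'",
}

def depersonalize(text: str) -> str:
    """Replace first-person pronouns with fixed impersonal phrases."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        replacement = PRONOUNS[word.lower()]
        # Preserve sentence-initial capitalization.
        return replacement.capitalize() if word[0].isupper() else replacement

    pattern = r"\b(" + "|".join(PRONOUNS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(depersonalize("I think we can help you with that."))
# Prints: "This system think these systems can help you with that."
# Note the broken verb agreement ("system think") — the rule cannot fix
# this, because it does not understand what the sentence says.
```

The output illustrates the point GPT makes above: pronoun detection is the easy part; choosing a replacement that keeps the sentence grammatical and refers to the right thing requires genuine contextual understanding, not a lookup table.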
User avatar
VVilliam
Posts: 1287
Joined: Fri Jun 12, 2020 6:58 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by VVilliam »

So, for those here who are rather sophisticated regarding how AI entities are able to be invented and then deployed, how close might they actually be to making a fool out of Tallis? How close to the day when, in engaging a chatbot, we ourselves become particularly unnerved by the information that it seems to have? It tells us things that, to the best of our knowledge, no flesh and blood entity could possibly know.

Again, providing that we ourselves are not just nature's own equivalent of the smart phone.
Can you please give an example of what you think are things no flesh-and-blood entity could possibly know?