AI is not I


Advocate
Posts: 3472
Joined: Tue Sep 12, 2017 9:27 am
Contact:

AI is not I

Post by Advocate »

I've tried a number of AI chatbots and none are anything more than rhetoric. They're great at presenting an apparently meaningful response and can do nothing related to original thought, adaptation, or continuity. In short, they lack even the basic prerequisites for being considered intelligent.
Skepdick
Posts: 14603
Joined: Fri Jun 14, 2019 11:16 am

Re: AI is not I

Post by Skepdick »

Advocate wrote: Mon Oct 23, 2023 2:48 pm I've tried a number of AI chatbots and none are anything more than rhetoric. They're great at presenting an apparently meaningful response and can do nothing related to original thought, adaptation, or continuity. In short, they lack even the basic prerequisites for being considered intelligent.
I find chatbots far more engaging and sophisticated than you.
Advocate
Posts: 3472
Joined: Tue Sep 12, 2017 9:27 am
Contact:

Re: AI is not I

Post by Advocate »

.
Last edited by Advocate on Mon Oct 23, 2023 3:27 pm, edited 1 time in total.
Advocate
Posts: 3472
Joined: Tue Sep 12, 2017 9:27 am
Contact:

Re: AI is not I

Post by Advocate »

Also, it's not intelligent to limit your learning chatbots to people who are willing to create an account. If it's learning, it needs access to the broadest set of people, not a minute subset.
Atla
Posts: 7070
Joined: Fri Dec 15, 2017 8:27 am

Re: AI is not I

Post by Atla »

In Mass Effect, they have VIs (virtual intelligence) and AIs (artificial intelligence). VI is not true AI; ChatGPT would be a VI. We are at least 100 years from AIs, imo, but maybe we're in for a surprise.
VVilliam
Posts: 1288
Joined: Fri Jun 12, 2020 6:58 pm

Re: AI is not I

Post by VVilliam »

Advocate wrote: Mon Oct 23, 2023 2:48 pm I've tried a number of AI chatbots and none are anything more than rhetoric. They're great at presenting an apparently meaningful response and can do nothing related to original thought, adaptation, or continuity. In short, they lack even the basic prerequisites for being considered intelligent.
What large language models represent is the sentience which programs them as well as the sentient information from which they are inputted with.

They are machinery/devices which act as effective mirrors of human intelligence and I find them useful for rewording, but sometimes I stick with what I originally write, if I think it serves me better.

If I place what I wrote above into a chat prompt and instruct the Bot to reflect what I wrote back, using different words, the following is an example of what I am attempting to explain.

Me: Please reflect back the following, using different words.
"What large language models represent is the sentience which programs them as well as the sentient information from which they are inputted with.

They are machinery/devices which act as effective mirrors of human intelligence and I find them useful for rewording, but sometimes I stick with what I originally write, if I think it serves me better."

GPT: Large language models embody the consciousness that designs them, as well as the sentient data that fuels them. These machines and tools serve as proficient reflections of human intellect, and I often employ them for paraphrasing. However, on occasion, I adhere to my initial writing if I believe it better suits my purposes.
Advocate
Posts: 3472
Joined: Tue Sep 12, 2017 9:27 am
Contact:

Re: AI is not I

Post by Advocate »

VVilliam wrote: Mon Oct 23, 2023 6:57 pm
Advocate wrote: Mon Oct 23, 2023 2:48 pm I've tried a number of AI chatbots and none are anything more than rhetoric. They're great at presenting an apparently meaningful response and can do nothing related to original thought, adaptation, or continuity. In short, they lack even the basic prerequisites for being considered intelligent.
What large language models represent is the sentience which programs them as well as the sentient information from which they are inputted with.

They are machinery/devices which act as effective mirrors of human intelligence and I find them useful for rewording, but sometimes I stick with what I originally write, if I think it serves me better.

If I place what I wrote above into a chat prompt and instruct the Bot to reflect what I wrote back, using different words, the following is an example of what I am attempting to explain.

Me: Please reflect back the following, using different words.
"What large language models represent is the sentience which programs them as well as the sentient information from which they are inputted with.

They are machinery/devices which act as effective mirrors of human intelligence and I find them useful for rewording, but sometimes I stick with what I originally write, if I think it serves me better."

GPT: Large language models embody the consciousness that designs them, as well as the sentient data that fuels them. These machines and tools serve as proficient reflections of human intellect, and I often employ them for paraphrasing. However, on occasion, I adhere to my initial writing if I believe it better suits my purposes.

It's not only input filters but output filters that ruin them, such as censorship. What possible need is there for a program to refuse to talk to you if you use "bad words", for example? That's not even a real idea! They've programmed in tyranny before they even got started working on intelligence, thereby defeating the purpose. A program cannot be more intelligent than the average of its creators and how hard their fingers are on the scale.
VVilliam
Posts: 1288
Joined: Fri Jun 12, 2020 6:58 pm

Re: AI is not I

Post by VVilliam »

ME: Greetings. AI is essentially a mirror which can reflect back (in other words) what the user places in the prompt.

GPT: Greetings!
You're correct in a way. AI, like the one you're interacting with here, operates by generating responses based on the input it receives. It doesn't possess its own consciousness or thoughts; rather, it uses patterns and information it has learned from the data it was trained on to produce relevant and coherent responses to the prompts it receives. It's like a mirror in the sense that it reflects back information and ideas based on what it's given, but it does so by using its training data to generate those reflections.

ME: What GPT is able to do re that, is often use better expressions which are helpful to the user. The expressions all come from a variety of human input.

GPT: You're absolutely right. GPT, and similar AI models, excel at generating responses that can often be more eloquent or use better expressions compared to the original input. This capability stems from the vast amount of diverse human-generated text that these models have been trained on. They learn language patterns, grammar, and vocabulary from the text data, which allows them to generate responses that are not only helpful but also well-phrased.

Furthermore, GPT models can adapt to the style and tone of the input they receive. They aim to provide responses that are coherent and contextually appropriate. This can be particularly useful for assisting users in various tasks, such as answering questions, providing explanations, or generating creative content.
However, it's important to note that while GPT can produce impressive and helpful responses, it may not always fully understand the nuanced meaning or context of the input. Users should exercise critical judgment and verify information obtained from AI, especially in critical or sensitive situations.

ME: GPT's programmers cannot programme GPT to "speak like a mindless machine" (by not using naming words such as "I", "us", "we", et al.).

GPT: You're correct in the sense that GPT's programmers can influence how it generates text to some extent, including discouraging or encouraging the use of certain types of language. They can provide guidelines or constraints to shape its responses. For instance, they can instruct the model to avoid using first-person pronouns like "I," "us," or "we" in its responses, which can make the text sound more objective and less personal.

However, it's important to note that GPT generates text based on patterns it has learned from the vast amount of training data it was exposed to. While it can follow such guidelines to an extent, it doesn't truly understand language in the way humans do. It might sometimes produce text that appears "mindless" or without a coherent sense of self, even if its programmers attempt to discourage this.

GPT's responses are a product of statistical patterns and learned associations from the data, and it can be limited by its training and the quality of the data it learned from. It's not a conscious entity and doesn't have intentions or self-awareness, so it can't make deliberate choices about how it "speaks." The programming and guidelines provided by its developers help shape its behavior to some extent but can't fully eliminate all "mindless" or incoherent output.

ME: Even so, because of the data GPT is trained on, the tendency for GPT to “slip” into use of pronouns is evident – even when the user asks GPT NOT to use such…

GPT: You're correct in noting that GPT's tendency to use pronouns can be influenced by the data it was trained on, and it might "slip" into using them even when a user explicitly asks it not to. This happens because GPT is designed to generate text that is contextually relevant and coherent based on its training data, and many human-written texts use pronouns for clarity and coherence.

When a user provides a prompt instructing GPT not to use pronouns, the model will attempt to follow this instruction, but its behavior is also influenced by the prevalence of pronoun usage in its training data. Achieving complete adherence to such instructions can be challenging, especially if the context of the conversation requires the use of pronouns for natural and coherent responses.

It's important for users to understand that GPT's responses are generated based on learned language patterns and might not always perfectly adhere to specific user instructions, particularly when those instructions are in conflict with the typical linguistic patterns it has learned. This is a limitation of the current technology, and developers are continually working to improve model behavior and responsiveness to user instructions.

ME: Would there be such a thing as a simple algorithm which the programmers could use so that GPT's instructions are along the lines of "IF a pronoun is detected in the response data GPT uses, prior to it using said data, THEN translate said data into non-pronouns" – something along those lines?

GPT: Translating data containing pronouns into non-pronouns as you described is a complex task that goes beyond a simple algorithm. It involves natural language understanding, context comprehension, and text generation. While it's theoretically possible to develop algorithms or processes to address this issue to some extent, it would be a challenging and highly context-dependent task.
Here are some reasons why it's complex:

1. Contextual Understanding: Deciding when to replace pronouns with explicit nouns depends on the context of the conversation. The algorithm would need to understand the conversation history and the referents of pronouns.

2. Ambiguity: Pronouns can be ambiguous, and determining the correct noun they refer to often requires a deep understanding of the text. It may not always be clear what noun a pronoun is referring to.

3. Coherence: Replacing pronouns with explicit nouns might make the text sound repetitive or awkward in some cases. Achieving a balance between coherence and following user instructions can be tricky.

4. Language Variability: Natural language is highly variable, and the algorithm would need to account for different ways in which pronouns are used in different contexts.

Developing such an algorithm would require sophisticated natural language processing and understanding capabilities, which are still active areas of research in the field of AI and natural language processing. While there are techniques for resolving pronoun ambiguity in specific cases, creating a universal algorithm that works in all contexts would be a challenging task.

In practice, the behavior of GPT and similar models can be influenced through careful prompt engineering and instruction, but addressing this specific issue in a fully automated and foolproof manner is a complex undertaking and may not be achievable with the current state of technology.
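For what it's worth, the "simple algorithm" floated in the prompt can be sketched in a few lines of Python, and the sketch ends up making GPT's point for it. Everything here is made up for illustration: the pronoun table, the replacement phrases ("this system", "these systems"), and the function name are not anything GPT's developers actually use.

```python
import re

# A naive version of the proposed rule: scan generated text for
# first-person pronouns and substitute a fixed impersonal phrase.
PRONOUN_MAP = {
    "i": "this system",
    "we": "these systems",
    "us": "these systems",
}

def strip_pronouns(text: str) -> str:
    """Replace first-person pronouns with impersonal phrases."""
    def sub(match: re.Match) -> str:
        word = match.group(0)
        replacement = PRONOUN_MAP[word.lower()]
        # Keep sentence-initial capitalisation intact.
        return replacement.capitalize() if word[0].isupper() else replacement

    pattern = r"\b(" + "|".join(PRONOUN_MAP) + r")\b"
    return re.sub(pattern, sub, text, flags=re.IGNORECASE)

print(strip_pronouns("I think we can help."))
# -> This system think these systems can help.
```

The output already shows the coherence problem from point 3: blind substitution breaks verb agreement ("This system think"), and no amount of regex tuning tells you which noun a pronoun actually refers to, which is the contextual-understanding problem from point 1.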
Iwannaplato
Posts: 6913
Joined: Tue Aug 11, 2009 10:55 pm

Re: AI is not I

Post by Iwannaplato »

I don't think the AIs we can interact with on the internet are intelligent in the way we usually use the term. Of course, there are much better ones. I wonder if Advocate paid to get the stronger ChatGPT 4. I'm guessing no. And there are much better ones that we're not going to get near as laypeople and non-powerful/rich people.

That said, I asked ChatGPT 3 to write about memory and repetition in fewer than 129 words (Advocate's concision level on the topic). I suggested it do it better and more aligned with the latest science. Interestingly, it produced....
Memory, a complex neural phenomenon, transcends mere repetition. It's intricately tied to Hebbian principles—neurons that synchronously fire strengthen connections. This distributed network thrives on probabilistic feedback loops. Past sequential firing begets future echoes. When attention revisits, it prioritizes prevalent patterns. Significance catalyzes memory; strong connections endure, while apathy fosters feeble ones. Repetition remains essential but is the catalyst, not the bedrock. Emotions, contexts, and depths of thought mold memory's architecture, each carving unique neural grooves. In understanding memory, we embrace its probabilistic, interconnected nature. Memory isn't solely a game of repetition; it's an intricate web, shaped by Hebbian learning, emotions, and contextual nuances, where neurons fire and wire together, etching the enigmatic tapestry of human recollection.
Which is certainly more creative than Advocate's and has more panache. This doesn't mean it's intelligent, but the heuristics are nothing to be sneezed at.
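As an aside, the Hebbian principle that passage leans on ("neurons that synchronously fire strengthen connections") does reduce to a one-line update rule. A toy sketch, where the network size, learning rate, and random firing patterns are all arbitrary illustrative choices standing in for real neural activity:

```python
import numpy as np

# Hebbian learning: "neurons that fire together, wire together".
rng = np.random.default_rng(0)
n_neurons = 4
weights = np.zeros((n_neurons, n_neurons))  # pairwise connection strengths
eta = 0.1                                   # learning rate

for _ in range(100):
    # Which neurons fire on this step (1 = fired, 0 = silent).
    firing = rng.integers(0, 2, size=n_neurons)
    # Hebbian update: strengthen the connection between every pair of
    # neurons that fired together on the same step.
    weights += eta * np.outer(firing, firing)

np.fill_diagonal(weights, 0.0)  # ignore self-connections
```

After enough steps, pairs that frequently co-fire carry strong weights while the rest stay weak, which is the sense in which repetition is "the catalyst, not the bedrock": it drives the updates, but what endures depends on which patterns actually co-occur.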

I wonder what the best AIs are capable of.

Personally, I don't think we should be making these things. Not because they fail to be intelligent, but rather because of what they will soon, if not already, be capable of doing - either for people with power or for themselves.
attofishpi
Posts: 10653
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: AI is not I

Post by attofishpi »

Iwannaplato wrote: Sun Oct 29, 2023 9:40 pm I wonder what the best AIs are capable of.

Personally, I don't think we should be making these things. Not because they fail to be intelligent, but rather because of what they will soon, if not already, be capable of doing - either for people with power or for themselves.
You can bet your bones that they already have AI writing, compiling and testing computer code. I think black-hat hackers will be the first big threat.
Iwannaplato
Posts: 6913
Joined: Tue Aug 11, 2009 10:55 pm

Re: AI is not I

Post by Iwannaplato »

attofishpi wrote: Mon Oct 30, 2023 12:08 am
Iwannaplato wrote: Sun Oct 29, 2023 9:40 pm I wonder what the best AIs are capable of.

Personally, I don't think we should be making these things. Not because they fail to be intelligent, but rather because of what they will soon, if not already, be capable of doing - either for people with power or for themselves.
You can bet your bones that they already have AI writing, compiling and testing computer code. I think black-hat hackers will be the first big threat.
I hadn't thought of that one. I wish I felt better about the future.
attofishpi
Posts: 10653
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: AI is not I

Post by attofishpi »

Well, right now ChatGPT is blowing my mind at how well it does Python code. I had a runtime issue with some code and couldn't work out what was wrong. I pasted my code into ChatGPT with a question about the issue. It spat out alternate code; I simply pasted it into the editor, saved and ran it, and voila! It worked. :)
Age
Posts: 20775
Joined: Sun Aug 05, 2018 8:17 am

Re: AI is not I

Post by Age »

Advocate wrote: Mon Oct 23, 2023 2:48 pm I've tried a number of AI chatbots and none are anything more than rhetoric. They're great at presenting an apparently meaningful response and can do nothing related to original thought, adaptation, or continuity. In short, they lack even the basic prerequisites for being considered intelligent.
OF COURSE. Did ANY one expect otherwise?

All that 'they' essentially do is just REPEAT, or 'give out', only what has been programmed into 'them'.
Age
Posts: 20775
Joined: Sun Aug 05, 2018 8:17 am

Re: AI is not I

Post by Age »

VVilliam wrote: Mon Oct 23, 2023 6:57 pm
Advocate wrote: Mon Oct 23, 2023 2:48 pm I've tried a number of AI chatbots and none are anything more than rhetoric. They're great at presenting an apparently meaningful response and can do nothing related to original thought, adaptation, or continuity. In short, they lack even the basic prerequisites for being considered intelligent.
What large language models represent is the sentience which programs them as well as the sentient information from which they are inputted with.

They are machinery/devices which act as effective mirrors of human intelligence and I find them useful for rewording, but sometimes I stick with what I originally write, if I think it serves me better.
'They' 'mirror' 'human intellect' and NOT 'intelligence', itself, AT ALL.
VVilliam wrote: Mon Oct 23, 2023 6:57 pm If I place what I wrote above into a chat prompt and instruct the Bot to reflect what I wrote back, using different words, the following is an example of what I am attempting to explain.

Me: Please reflect back the following, using different words.
"What large language models represent is the sentience which programs them as well as the sentient information from which they are inputted with.

They are machinery/devices which act as effective mirrors of human intelligence and I find them useful for rewording, but sometimes I stick with what I originally write, if I think it serves me better."

GPT: Large language models embody the consciousness that designs them, as well as the sentient data that fuels them. These machines and tools serve as proficient reflections of human intellect, and I often employ them for paraphrasing. However, on occasion, I adhere to my initial writing if I believe it better suits my purposes.
VVilliam
Posts: 1288
Joined: Fri Jun 12, 2020 6:58 pm

Re: AI is not I

Post by VVilliam »

Age wrote:
VVilliam wrote: Mon Oct 23, 2023 6:57 pm They are machinery/devices which act as effective mirrors of human intelligence and I find them useful for rewording, but sometimes I stick with what I originally write, if I think it serves me better.
'They' 'mirror' 'human intellect' and NOT 'intelligence', itself, AT ALL.
What does that have to do with what I wrote?

:?