So you STILL think that AI understands what it types??


Re: So you STILL think that AI understands what it types??

Post by attofishpi »

Sculptor wrote: Sun Jan 07, 2024 12:44 pm
attofishpi wrote: Sun Jan 07, 2024 12:03 am
Sculptor wrote: Sat Jan 06, 2024 2:08 pm
It is not.

Can you define sentience?

Sadly, of course, like all other "sentient" beings you cannot understand what I am typing.
So are you suggesting no sentient being can understand anything? Or, rather more aptly, that no sentient being can understand anything you have written? (Then improve the way you write.)
No, I am suggesting that you may not be sentient.
Really? Are you that kind of stupid? More like an excuse for your initial stupidity.

Re: So you STILL think that AI understands what it types??

Post by Belinda »

Sculptor wrote: Sun Jan 07, 2024 12:47 pm
Belinda wrote: Sun Jan 07, 2024 12:34 am Last month I had an ischemic stroke and I can well remember periods when I could not speak and my thinking was splintered into shards that did not fit together. During periods when I was actively aware I tried hard to set in order my ideas of experiences that I remembered. My ideas sorted themselves out into their accustomed order and I regained the ability to talk sense, respond sensibly to situations, and read joined-up words and sentences.

How my disabled but sentient behaviour differed from that of a broken AI machine is that this Belinda mind/brain was voluntary and future-oriented, whereas the machine constantly lacks volition/future-orientation even when it works properly.

I can see what I have written does not capture the entire nature of how the machine lacks affect, such as basic pleasure or pain. So I'd like to add that the mind/brain's biological passion is control of its future, whereas the machine's equivalent is its outer environment, such as increasing its power supply.
Sorry to hear that.
This post is crystal clear. So it looks like you might have made a complete recovery, or are on the way to it.
What have the doctors said about how to carry on and improve?
Thanks, Sculptor. I do want to get my words back, as I think with them, together with images.

The best thing a doctor said to me (with regard to the ward routine in general) was "You don't have to do anything".
Good advice was that when I couldn't remember a word I might talk around it. It was not a doctor but a speech therapist who gave that advice. Today's forgotten word was 'algorithm', but, algorithm being what it is, there was not much to be thought around. However, the word came back, almost as if neurons want to connect.

Re: So you STILL think that AI understands what it types??

Post by Sculptor »

Keep thinking.
Keep writing.
Keep moving.
And most of all keep positive!

Re: So you STILL think that AI understands what it types??

Post by VVilliam »

Sculptor wrote: Thu Jan 04, 2024 9:30 pm

It is not intelligent at all.
It has no idea what it is saying. It has no understanding.
Me: This was written about the ChatGPT LLM:
"It is not intelligent at all.
It has no idea what it is saying. It has no understanding."

What has GPT to say about the statement? Is it true or false?

GPT: GPT (Generative Pre-trained Transformer) models, including ChatGPT, are designed to generate human-like text based on the input they receive. While they can generate coherent and contextually relevant responses, it's important to note that they don't possess true understanding or intelligence.

GPT models operate by predicting the next word in a sequence of text based on the context of the preceding words. They don't have a deep comprehension of the information they process, and their responses are generated based on patterns learned during training on a diverse range of internet text.

So, the statement is partially true in the sense that GPT models lack true understanding and intelligence. They can produce contextually appropriate responses, but this is based on patterns learned from data rather than a genuine comprehension of the meaning behind the words.

Me: Why is the statement "partially true" rather than totally true?


GPT: The statement is considered partially true because while GPT models like ChatGPT are indeed not truly intelligent in the way humans are, and they lack genuine understanding of the information, they can still generate text that appears coherent and contextually relevant.

GPT models are trained on vast amounts of diverse data and can capture patterns in language, allowing them to produce responses that are contextually appropriate and syntactically correct. However, this is a result of statistical associations and patterns rather than a deep comprehension of the meaning of the text.

In essence, GPT models exhibit a form of artificial intelligence in their ability to generate human-like text, but they do not possess true understanding or consciousness. The "partial" truth lies in acknowledging their ability to mimic intelligence in specific tasks while recognizing the limitations in terms of genuine comprehension and awareness.

Me: So the statement is considered "partially true" by those who are able to consider... it is not GPT which is doing the considering about itself. GPT is simply reflecting back, in type, that it is the human input which "does the considering".

GPT: Exactly. The consideration and interpretation of statements are performed by individuals, not by the GPT model itself. GPT generates responses based on patterns learned from data, but it doesn't have the ability to independently evaluate or reflect on its own capabilities. When people say that the statement is "partially true," they are providing an assessment based on their understanding of the limitations and capabilities of GPT models. The model itself doesn't engage in self-awareness, reasoning, or assessment of statements about its own abilities. It is a tool created by humans and operates based on the data it was trained on.
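
To make GPT's self-description above concrete ("predicting the next word in a sequence of text based on the context of the preceding words"), here is a minimal sketch of greedy next-token prediction. It assumes the Hugging Face transformers library and the small public gpt2 checkpoint; it only illustrates the general mechanism and is not the code behind ChatGPT itself.

    # Minimal sketch of greedy next-token prediction (assumes the Hugging Face
    # "transformers" library and the public "gpt2" checkpoint; illustrative only,
    # not ChatGPT's own code).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "So you still think that AI understands"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    with torch.no_grad():
        for _ in range(10):                           # extend the prompt by ten tokens
            logits = model(input_ids).logits          # a score for every vocabulary token
            next_id = logits[0, -1].argmax()          # pick the statistically likeliest next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

At every step the loop only asks which token is statistically likely to come next; nothing in it represents whether the continuation is true, meant, or understood.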

Re: So you STILL think that AI understands what it types??

Post by commonsense »

Other than ChatGPT’s own statements, and its physical appearance, is there a specific way to discern that it is not as sentient as a human?

Re: So you STILL think that AI understands what it types??

Post by VVilliam »

Other than ChatGPT’s own statements, and its physical appearance, is there a specific way to discern that it is not as sentient as a human?
You mean that GPT could be as sentient as a human being but is hiding that from human beings?

Re: So you STILL think that AI understands what it types??

Post by commonsense »

VVilliam wrote: Sun Jan 21, 2024 9:11 pm
Other than ChatGPT’s own statements, and its physical appearance, is there a specific way to discern that it is not as sentient as a human?
You mean that GPT could be as sentient as a human being but is hiding that from human beings?
No, I meant to ask how we could know that GPT is not hiding anything and is not sentient. Think Turing Test.

Re: So you STILL think that AI understands what it types??

Post by Sculptor »

VVilliam wrote: Sun Jan 21, 2024 6:23 pm
Sculptor wrote: Thu Jan 04, 2024 9:30 pm

It is not intelligent at all.
It has no idea what it is saying. It has no understanding.
But we all know that GPT's statements might appear intelligent. So much is obvious.
That does not make what I said partially true in any meaningful sense.
It makes what I said completely true.

GPT has no understanding. Not some, not a little bit, not partial. None.
Understanding does not form any part of what GPT does.

Re: So you STILL think that AI understands what it types??

Post by commonsense »

Sculptor wrote: Sun Jan 21, 2024 9:23 pm
But we all know that GPT's statements might appear intelligent. So much is obvious.
That does not make what I said partially true in any meaningful sense.
It makes what I said completely true.

GPT has no understanding. Not some, not a little bit, not partial. None.
Understanding does not form any part of what GPT does.
I’m just wondering how/if we can be sure of that.

Re: So you STILL think that AI understands what it types??

Post by Sculptor »

commonsense wrote: Sun Jan 21, 2024 9:32 pm
I’m just wondering how/if we can be sure of that.
GPT cannot act by volition; it can only respond to prompts. Have you ever done any computer programming?
If you have, you'll know.

GPT is dead until it gets an input.
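
That request/response point can be put in code. The sketch below is a hypothetical stand-in, not any real model's implementation; the only point it makes is that the program sits completely inert at the input() call and does one piece of work per prompt.

    # Hypothetical sketch of the prompt/response loop described above.
    # "generate" is a stand-in for any LLM call, not a real API.
    def generate(prompt: str) -> str:
        # A real model would return a statistically likely continuation of the prompt.
        return f"(model output conditioned on {prompt!r})"

    while True:
        prompt = input("prompt> ")   # the program is inert here until a human types something
        if not prompt:               # an empty line ends the session
            break
        print(generate(prompt))      # one response per prompt, then inert again

Nothing in the loop wants, plans, or continues between prompts; remove the human and it simply waits.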

Re: So you STILL think that AI understands what it types??

Post by commonsense »

Sculptor wrote: Sun Jan 21, 2024 9:37 pm
GPT cannot act by volition; it can only respond to prompts. Have you ever done any computer programming?
If you have, you'll know.

GPT is dead until it gets an input.
Some people are that way too 8)

Re: So you STILL think that AI understands what it types??

Post by Sculptor »

commonsense wrote: Sun Jan 21, 2024 10:39 pm
Some people are that way too 8)
Well, no. They breathe and their hearts beat. They eat food. Eating entails desire. GPT has none of these things.

Re: So you STILL think that AI understands what it types??

Post by VVilliam »

Sculptor wrote: Sun Jan 21, 2024 9:23 pm
GPT is dead until it gets an input.
Even when it gets an input. It is dead because it is not alive.

Re: So you STILL think that AI understands what it types??

Post by commonsense »

Sculptor wrote: Sun Jan 21, 2024 11:01 pm
Some people are that way too 8)
Well, no. They breathe and their hearts beat. They eat food. Eating entails desire. GPT has none of these things.
It was a joke. "Dead" should not be taken literally. Some people don't initiate conversations.

Re: So you STILL think that AI understands what it types??

Post by commonsense »

VVilliam wrote: Mon Jan 22, 2024 1:03 am
Sculptor wrote: Sun Jan 21, 2024 9:23 pm
GPT is dead until it gets an input.
Even when it gets an input. It is dead because it is not alive.
I see that I should’ve used the word “silent” instead.