attofishpi wrote: ↑Sun Jan 07, 2024 12:03 am
So are you suggesting no sentient being can understand anything? Or, rather more aptly, that no sentient being can understand anything you have written? (Then improve the way you write.)

Sculptor wrote: ↑Sun Jan 07, 2024 12:44 pm
No, I am suggesting that you may not be sentient.

Really? Are you that kind of stupid? More like an excuse for your initial stupidity.
So you STILL think that AI understands what it types??
- attofishpi
Re: So you STILL think that AI understands what it types??
Belinda wrote: ↑Sun Jan 07, 2024 12:34 am
Last month I had an ischemic stroke, and I can well remember periods when I could not speak and my thinking was splintered into shards that did not fit together. During periods when I was actively aware I tried hard to set in order my ideas of experiences that I remembered. My ideas sorted themselves out into their accustomed order and I regained the ability to talk sense, respond sensibly to situations, and read joined-up words and sentences.
How my disabled but sentient behaviour differed from that of a broken AI machine is that this Belinda mind/brain was voluntary and future-oriented, whereas the machine constantly lacks volition/future-orientation even when it works properly.
I can see what I have written does not capture the entire nature of how the machine lacks affect such as basic pleasure or pain. So I'd like to add that the mind/brain's biological passion is control of its future, whereas the machine's equivalent is its outer environment increasing its power supply, etc.

Sculptor wrote: ↑Sun Jan 07, 2024 12:47 pm
Sorry to hear that.

Thanks, Sculptor. I do want to get my words back, as I think with them, together with images.
This post is crystal clear, so it looks like you might have made a complete recovery or are on the way to it.
What have the doctors said about how to carry on and improve?
The best thing a doctor said to me (with regard to the ward routine in general) was "You don't have to do anything."
Good advice was that when I couldn't remember a word I might talk around it. That came not from a doctor but from a speech therapist. Today's forgotten word was "algorithm", but algorithm being what it is, there was not much to be talked around. However, the word came back, almost as if neurons want to connect.
Re: So you STILL think that AI understands what it types??
Keep thinking.
Belinda wrote: ↑Sun Jan 07, 2024 7:29 pm
Thanks, Sculptor. I do want to get my words back, as I think with them, together with images. ...
Keep writing.
Keep moving.
And most of all, keep positive!
Re: So you STILL think that AI understands what it types??
Me: This was written about the ChatGPT LLM:
"It is not intelligent at all.
It has no idea what it is saying. It has no understanding."
What has GPT to say about the statement? Is it true or false?
GPT: GPT (Generative Pre-trained Transformer) models, including ChatGPT, are designed to generate human-like text based on the input they receive. While they can generate coherent and contextually relevant responses, it's important to note that they don't possess true understanding or intelligence.
GPT models operate by predicting the next word in a sequence of text based on the context of the preceding words. They don't have a deep comprehension of the information they process, and their responses are generated based on patterns learned during training on a diverse range of internet text.
So, the statement is partially true in the sense that GPT models lack true understanding and intelligence. They can produce contextually appropriate responses, but this is based on patterns learned from data rather than a genuine comprehension of the meaning behind the words.
Me: Why is the statement "partially true" rather than totally?
GPT: The statement is considered partially true because while GPT models like ChatGPT are indeed not truly intelligent in the way humans are, and they lack genuine understanding of the information, they can still generate text that appears coherent and contextually relevant.
GPT models are trained on vast amounts of diverse data and can capture patterns in language, allowing them to produce responses that are contextually appropriate and syntactically correct. However, this is a result of statistical associations and patterns rather than a deep comprehension of the meaning of the text.
In essence, GPT models exhibit a form of artificial intelligence in their ability to generate human-like text, but they do not possess true understanding or consciousness. The "partial" truth lies in acknowledging their ability to mimic intelligence in specific tasks while recognizing the limitations in terms of genuine comprehension and awareness.
Me: So the statement is considered "partially true" by those who are able to consider... it is not GPT which is doing the considering about itself. GPT is simply reflecting back, in type, that it is the human input which "does the considering".
GPT: Exactly. The consideration and interpretation of statements are performed by individuals, not by the GPT model itself. GPT generates responses based on patterns learned from data, but it doesn't have the ability to independently evaluate or reflect on its own capabilities. When people say that the statement is "partially true," they are providing an assessment based on their understanding of the limitations and capabilities of GPT models. The model itself doesn't engage in self-awareness, reasoning, or assessment of statements about its own abilities. It is a tool created by humans and operates based on the data it was trained on.
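GPT's description of its own mechanism above (predicting the next word from the preceding context) can be illustrated with a deliberately tiny sketch. Everything here is a toy stand-in: the corpus and the bigram counts are illustrative assumptions, not GPT's actual transformer architecture, tokenizer, or training data.

```python
import random

# Toy "language model": learn how often each word follows each word
# in a tiny corpus, then generate text by repeated next-word sampling.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count word-pair frequencies (a bigram table).
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def next_word(context):
    """Sample the next word from the learned frequency distribution."""
    options = counts.get(context)
    if not options:
        return None  # no observed successor
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate purely by next-word prediction -- no meaning is involved.
random.seed(0)
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The output can look locally plausible while the program manipulates nothing but frequencies, which is the point both sides of the thread are arguing over.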
Re: So you STILL think that AI understands what it types??
VVilliam wrote: ↑Sun Jan 21, 2024 6:23 pm
Me: This was written about the ChatGPT LLM: "It is not intelligent at all. It has no idea what it is saying. It has no understanding." ...

Other than ChatGPT’s own statements, and its physical appearance, is there a specific way to discern that it is not as sentient as a human?
Re: So you STILL think that AI understands what it types??
Other than ChatGPT’s own statements, and its physical appearance, is there a specific way to discern that it is not as sentient as a human?

You mean that GPT could be sentient as a human being but is hiding that from human beings?
Re: So you STILL think that AI understands what it types??
No, I meant to ask how we could know that GPT is not hiding anything and that GPT is not sentient. Think Turing Test.
Re: So you STILL think that AI understands what it types??
VVilliam wrote: ↑Sun Jan 21, 2024 6:23 pm
Me: This was written about the ChatGPT LLM: "It is not intelligent at all. It has no idea what it is saying. It has no understanding." ...

But we all know that GPT statements might appear intelligent. So much is obvious.
That does not make what I said partially true in any meaningful sense.
It makes what I said completely true.
GPT has no understanding. Not some, not a little bit, not partial. None.
Understanding does not form any part of what GPT does.
Re: So you STILL think that AI understands what it types??
Sculptor wrote: ↑Sun Jan 21, 2024 9:23 pm
But we all know that GPT statements might appear intelligent. So much is obvious. ... GPT has no understanding. Not some, not a little bit, not partial. None.

I’m just wondering how/if we can be sure of that.
Re: So you STILL think that AI understands what it types??
commonsense wrote: ↑Sun Jan 21, 2024 9:32 pm
I’m just wondering how/if we can be sure of that.

GPT cannot act by volition, but only respond to prompts. Have you ever done computer programming?
If you have, you'll know.
GPT is dead until it gets an input.
Re: So you STILL think that AI understands what it types??
Sculptor wrote: ↑Sun Jan 21, 2024 9:37 pm
GPT cannot act by volition, but only respond to prompts. ... GPT is dead until it gets an input.

Some people are that way too.
Re: So you STILL think that AI understands what it types??
commonsense wrote: ↑Sun Jan 21, 2024 10:39 pm
Some people are that way too.

Well, no. They breathe and their hearts beat. They eat food. Eating entails desire. GPT has none of these things.
Re: So you STILL think that AI understands what it types??
Sculptor wrote: ↑Sun Jan 21, 2024 11:01 pm
Well, no. They breathe and their hearts beat. They eat food. Eating entails desire. GPT has none of these things.

It was a joke. "Dead" should not be taken literally. Some people don't initiate conversations.
-
- Posts: 5259
- Joined: Sun Mar 26, 2017 6:38 pm