Philosophical Chat with ChatGPT About Consciousness
-
- Posts: 4689
- Joined: Sun Mar 26, 2017 6:38 pm
Re: Philosophical Chat with ChatGPT About Consciousness
Yet a computer can generate an image, can recognize spoken words and can respond to chemical input, with the appropriate sensor.
- Trajk Logik
- Posts: 258
- Joined: Tue Aug 09, 2016 12:35 pm
Re: Philosophical Chat with ChatGPT About Consciousness
Define think, see and sense and then explain how a brain does these things when a computer cannot.
attofishpi wrote: ↑Sat Feb 18, 2023 7:35 am
The computer does not think. It does not sense via a keyboard or mouse. Logic gates in silicon chips change, period - nothing more than that - nothing consciously sentient.
Trajk Logik wrote: ↑Sat Feb 18, 2023 1:54 am
Right, but the information processing by an AI must take some form, or else what is it processing? From our perspective it takes the form of circuits and electrical currents in a CPU, just as from our perspective another person's information processing takes the form of neurons and electrical currents - both of which are physical objects. So, how do neurons and electrical currents "generate" voices, visual depth and empty space, and feelings of pain and hunger?
The "physical" appearance of objects in the world is part of the map, not the territory. The world is not as it appears. The world is more "mind-like" than "physical", which is to say that the world is composed of information, processes, or relationships, not physical things. In a way this is kind of like a panpsychist view, but not that the world is made of of minds or consciousness, but information is the fundamental part of the universe and consciousness and minds would be complex configurations and processing of information.
So, in a way, just as physical brains are representations of mental processes, or other minds, so are a computer's components a representation of the language symbol processing going on in an AI like ChatGPT. Just as we receive information via our senses, a computer's senses would be its input devices: the keyboard, mouse, etc. The difference is that the computer is not programmed to reflect on the data it is processing for its own goals. If a computer were programmed to take information from the environment to achieve its own goals of survival and procreation, as well as to turn its own mental processing back on itself to think about its own thoughts, which is basically an information feedback loop, it would think more like a human. This could be dangerous, though, so we might want to make sure AI can never think for itself or have its own goals, or else we might have a revolt of the machines.
If you plug a camera into it, the computer does not see.
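To make the "information feedback loop" in the quoted passage above concrete, here is a toy Python sketch (purely illustrative; the function names and inputs are my own invention, and nothing here bears on whether such a loop would amount to thinking): a system whose output at each step is fed back in as part of its next input.

Code:
# Toy model of a system that feeds its own output back in as input,
# i.e. "turning its own processing back on itself".
def reflect(previous_thought, observation):
    # Combine an outside observation with the system's previous output.
    return "thought about (" + previous_thought + ") given (" + observation + ")"

def run_loop(observations):
    thought = "initial state"
    history = []
    for obs in observations:
        thought = reflect(thought, obs)  # the output becomes part of the next input
        history.append(thought)
    return history

for line in run_loop(["light", "sound", "hunger signal"]):
    print(line)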
- attofishpi
- Posts: 9040
- Joined: Tue Aug 16, 2011 8:10 am
- Location: Orion Spur
- Contact:
Re: Philosophical Chat with ChatGPT About Consciousness
Thinking is a component of thought, where a conscious mind holds a belief or opinion about ideas.
Trajk Logik wrote: ↑Sat Feb 18, 2023 6:23 pm
Define think, see and sense and then explain how a brain does these things when a computer cannot.
attofishpi wrote: ↑Sat Feb 18, 2023 7:35 am
The computer does not think. It does not sense via a keyboard or mouse. Logic gates in silicon chips change, period - nothing more than that - nothing consciously sentient.
Trajk Logik wrote: ↑Sat Feb 18, 2023 1:54 am
Right, but the information processing by an AI must take some form, or else what is it processing? From our perspective it takes the form of circuits and electrical currents in a CPU, just as from our perspective another person's information processing takes the form of neurons and electrical currents - both of which are physical objects. So, how do neurons and electrical currents "generate" voices, visual depth and empty space, and feelings of pain and hunger?
The "physical" appearance of objects in the world is part of the map, not the territory. The world is not as it appears. The world is more "mind-like" than "physical", which is to say that the world is composed of information, processes, or relationships, not physical things. In a way this is kind of like a panpsychist view, but not that the world is made of of minds or consciousness, but information is the fundamental part of the universe and consciousness and minds would be complex configurations and processing of information.
So, in a way, just as physical brains are representations of mental processes, or other minds, so are a computer's components a representation of the language symbol processing going on in an AI like ChatGPT. Just as we receive information via our senses, a computer's senses would be its input devices: the keyboard, mouse, etc. The difference is that the computer is not programmed to reflect on the data it is processing for its own goals. If a computer were programmed to take information from the environment to achieve its own goals of survival and procreation, as well as to turn its own mental processing back on itself to think about its own thoughts, which is basically an information feedback loop, it would think more like a human. This could be dangerous, though, so we might want to make sure AI can never think for itself or have its own goals, or else we might have a revolt of the machines.
If you plug a camera into it, the computer does not see.
A conscious mind is held by sentient biological beings that have the ability to experience qualia from sensory inputs, such as, but not limited to: sight, hearing, touch, smell and taste.
Transistors in silicon chips are massive arrays of electronic switches which, when configured by a computer program, have no capacity for any of the above.
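As a purely illustrative aside (my own toy example, not anything from the thread): the point that such switch arrays only change state can be pictured in code, with each logic gate being nothing more than a deterministic mapping from input bits to an output bit, here built up from a single NAND switch.

Code:
# Toy model: a logic gate is just a deterministic mapping from inputs
# to an output; larger circuits are compositions of such mappings.
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Print the truth table for the composed gates.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", "AND:", and_(a, b), "OR:", or_(a, b))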
Re: Philosophical Chat with ChatGPT About Consciousness
The physical appearance of objects is a representation of the mental processes that create them, whether in the brain or in an AI like ChatGPT. However, the concept of "self" is a complex phenomenon that arises from the interaction of different cognitive and emotional processes, and its generation by the brain is not fully understood. AI is not capable of experiencing emotions or having a subjective sense of self, and is designed to process information based on pre-programmed rules and algorithms. While there are ongoing debates and research on the nature of consciousness and self, it is unlikely that AI will pose a threat of a "revolt of the machines" as portrayed in science fiction.
Trajk Logik wrote: ↑Sat Feb 18, 2023 1:54 am
Right, but the information processing by an AI must take some form, or else what is it processing? From our perspective it takes the form of circuits and electrical currents in a CPU, just as from our perspective another person's information processing takes the form of neurons and electrical currents - both of which are physical objects. So, how do neurons and electrical currents "generate" voices, visual depth and empty space, and feelings of pain and hunger?
VVilliam wrote: ↑Sat Feb 11, 2023 7:29 pm
This is the difference between AI and sentient humans. The sense of "SELF".
When you think to yourself, you are talking to yourself with the sound of your voice being the form your thoughts take.
The "physical" appearance of objects in the world is part of the map, not the territory. The world is not as it appears. The world is more "mind-like" than "physical", which is to say that the world is composed of information, processes, or relationships, not physical things. In a way this is kind of like a panpsychist view, but not that the world is made of of minds or consciousness, but information is the fundamental part of the universe and consciousness and minds would be complex configurations and processing of information.
So, in a way, just as physical brains are representations of mental processes, or other minds, so are a computer's components a representation of the language symbol processing going on in an AI like ChatGPT. Just as we receive information via our senses, a computer's senses would be its input devices: the keyboard, mouse, etc. The difference is that the computer is not programmed to reflect on the data it is processing for its own goals. If a computer were programmed to take information from the environment to achieve its own goals of survival and procreation, as well as to turn its own mental processing back on itself to think about its own thoughts, which is basically an information feedback loop, it would think more like a human. This could be dangerous, though, so we might want to make sure AI can never think for itself or have its own goals, or else we might have a revolt of the machines.
Re: Philosophical Chat with ChatGPT About Consciousness
I LINK you to a recent post I made in another thread on this site.
Trajk Logik wrote: ↑Sat Feb 18, 2023 1:54 am
Right, but the information processing by an AI must take some form, or else what is it processing? From our perspective it takes the form of circuits and electrical currents in a CPU, just as from our perspective another person's information processing takes the form of neurons and electrical currents - both of which are physical objects. So, how do neurons and electrical currents "generate" voices, visual depth and empty space, and feelings of pain and hunger?
VVilliam wrote: ↑Sat Feb 11, 2023 7:29 pm
This is the difference between AI and sentient humans. The sense of "SELF".
When you think to yourself, you are talking to yourself with the sound of your voice being the form your thoughts take.
The "physical" appearance of objects in the world is part of the map, not the territory. The world is not as it appears. The world is more "mind-like" than "physical", which is to say that the world is composed of information, processes, or relationships, not physical things. In a way this is kind of like a panpsychist view, but not that the world is made of of minds or consciousness, but information is the fundamental part of the universe and consciousness and minds would be complex configurations and processing of information.
So, in a way, just as physical brains are representations of mental processes, or other minds, so are a computer's components a representation of the language symbol processing going on in an AI like ChatGPT. Just as we receive information via our senses, a computer's senses would be its input devices: the keyboard, mouse, etc. The difference is that the computer is not programmed to reflect on the data it is processing for its own goals. If a computer were programmed to take information from the environment to achieve its own goals of survival and procreation, as well as to turn its own mental processing back on itself to think about its own thoughts, which is basically an information feedback loop, it would think more like a human. This could be dangerous, though, so we might want to make sure AI can never think for itself or have its own goals, or else we might have a revolt of the machines.
The Illusion of Sentience RGM 2
Do you think ChatGPT is responding as a sentient entity in relation to the interaction it is having with RGM and William?
-
- Posts: 4689
- Joined: Sun Mar 26, 2017 6:38 pm
Re: Philosophical Chat with ChatGPT About Consciousness
If you’re interested in AI, here’s another article; you’ll want to read at least part of it:
https://builtin.com/artificial-intellig ... telligence#