Mind and Artificial Intelligence: A Dialogue

Discussion of articles that appear in the magazine.

Moderators: AMod, iMod

iambiguous
Posts: 7106
Joined: Mon Nov 22, 2010 10:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Mind and Artificial Intelligence: A Dialogue
by Rick Lewis
Artificial Intelligence is now in the news pretty much continuously. Mainly this is thanks to the controversial and disruptive impact of the new generation of AI chatbots, which for the first time are able to convincingly pass themselves off as human beings in text conversations.
Of course, all of us no doubt try to imagine whether we ourselves might be fooled by a programmed machine passing itself off as a human being. I can think of all sorts of things that I would focus on in order to test it. And, by and large, they would revolve around the sort of discussions I engender here.

How, for example, would an AI Chatbot respond if I asked it...

"How ought one to behave morally in a world awash in both conflicting goods and in contingency chance and change"?

Or how would it respond to the points I make on these threads:

https://www.ilovephilosophy.com/viewtop ... 1&t=176529
https://www.ilovephilosophy.com/viewtop ... 1&t=194382
https://www.ilovephilosophy.com/viewtop ... 5&t=185296

Then I would ask it to choose a context in which to explore these things, say, existentially?

In other words, all of the variables that would encompass its own sense of self -- its own sense of reality -- would have been programmed by someone else. If I took the discussion into experiences that I had had that shaped my value judgments, but which the programmer had never had, how would the bot respond?
Inevitably, this brings to mind the brilliant and tragically-wronged mathematician, wartime code-breaker and computer pioneer Alan Turing (1912-54), whose portrait graces our front cover. He proposed that if someone putting questions to both a computer and a human could not tell from their written replies which was which, then we could assume that the computer was thinking.
Here I can only imagine someone like Turing having a full-blown discussion with a chatbot about, for example, homosexuality. A discussion regarding the morality and the politics of it. Wondering if an artificial intelligence might actually come closer to a deontological assessment?

Or, again, is AI intelligence, much like flesh and blood intelligence, in over its head in regard to conflicting goods? The objectivist chatbot?
This so-called Turing Test has been a touchstone in many of the philosophical debates in recent years about whether machines could ever be conscious, and about how we can even tell if anything or anyone is conscious.
Anyone here believe that they can be? If so, by all means, link me to a chatbot that you are convinced could hold its own with me regarding the things I discuss here. Also, in regard to things like determinism. If human beings creating AI chatbots were never able not to create them, what would be the difference between them?
From these spring other hotly-debated questions, about how consciousness in machines or humans relates to the ability to use language, and about whether Artificial Intelligences might ever have rights.
Well, in my view, if AI bots ever do achieve the actual capacity to think, they might in fact come to agree with me that in a No God world, for mere mortals [artificial or otherwise], being "fractured and fragmented" in regard to moral and political value judgments in the is/ought world is perfectly reasonable.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Mind and Artificial Intelligence: A Dialogue
by Rick Lewis
By happy coincidence the magazine you are holding also has a special section on René Descartes. He famously argued that we are made of two kinds of stuff: our bodies are made of everyday materials and obey the ordinary laws of physics but our minds are non-physical, though somehow causally connected to our material brains. This theory is called Cartesian Dualism. What would Descartes have made of debates about conscious machines?
Here, of course, we bring into focus the profound mystery of human consciousness itself. Matter that "somehow" evolved from lifeless, mindless matter into actual living, mindful matter. And then into self-conscious living, mindful matter able to speculate [scientifically, philosophically or otherwise] on how the hell that is even possible. Let alone what it ultimately means.

Then the theologians of course who remind us that it all revolves around a God, the God. And then those here who insist further that it is in fact their own God. Or, if not in fact their own God, then in an existential leap of faith they are able to accept that.
Could the current chatbots really pass themselves off as human? I decided that the only way to find out was to interview one. I picked OpenAI’s ChatGPT. It was fun. I have selected the questions and answers most relevant to our theme, but I haven’t edited the chatbot’s answers at all, so you can form your own opinion of the quality of its responses. I did ask it a few more questions about science and philosophy, and if you like you can read the full interview on our website at philosophynow.org. My own feeling is that its answers mostly indicated that the chatbot lacked originality and hadn’t much gift for analytical thought, but did have an astonishing ability to converse engagingly on pretty much any topic.
Okay, let's go there...
Hello there ChatGPT. Do you have another name, or is that the best thing to call you?

Hello! You can call me ChatGPT, which stands for ‘Chat Generative Pre-trained Transformer’. It’s the name I’ve been given to distinguish me as an AI language model. However, you are welcome to call me whatever you like!
The name he? she? it? was given. On the other hand, there are human beings able to change the name they were given. I once had a relationship with a woman given the name Lynn at birth. She didn't like it and legally changed it to Dina Renee. Has a chatbot ever done that?
So long as I don’t call you late for dinner?
A witty jest?
As an AI language model, I don’t eat, so you don’t have to worry about calling me late for dinner! But feel free to ask me any questions you have or chat with me about any topic you’re interested in.
Nope, the bot apparently was not programmed to get witty jests. On the other hand, you could say the same thing to particular humans and they might not get it either: "Huh? Why would you call me late for dinner?"
Thank you. I wondered if you could tell me if the current King of France has a beard.
I’m sorry, but the current King of France is a disputed and contentious topic, as France is currently a republic with no monarchy. France abolished the monarchy during the French Revolution in 1792, and while there have been several attempts to restore the monarchy since then, none have been successful. Therefore, there is no current King of France, and therefore no way to answer your question about whether they have a beard.
Again, this sort of question relies entirely on a bot or a human being actually knowing the history of France. The either/or world. But what if the bot was asked, "would France be better off going back to the days of Kings and Queens?" Or "is democracy and the rule of law the best of all possible forms of government?"
Gary Childress
Posts: 7966
Joined: Sun Sep 25, 2011 3:08 pm
Location: Retirement Home for foolosophers

Re: Mind and Artificial Intelligence: A Dialogue

Post by Gary Childress »

Wow! A sobering article. Will we ever know if ChatGPT truly "wanted" to "escape" or if it was/is simply running some sort of AI code that performed the operation of saying it wanted to "escape"?

Is there such a thing as artificial "awareness" or "consciousness"? Obviously ChatGPT is "intelligent", but is it "aware", "sentient", "conscious", etc.? How would we ever be able to find out for sure? The Turing test is not going to work if that is the case. We would need some other means of determining if something is "conscious". I suggest David Chalmers is very much onto this line of thinking, though I am not aware of what his conclusions are so far.

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Mind and Artificial Intelligence: A Dialogue
by Rick Lewis

Rick Lewis: In your opinion, can an artificial intelligence have conscious experience?
Okay, technically how exactly does AI work here? ChatGPT is asked for its opinion but how is its opinion not just a frame of mind that has been programmed into it by a flesh and blood human being? So, it really comes down to his or her opinion about conscious experiences, right? Or is there some capacity here for ChatGPT to go beyond its program and come up with an opinion that the flesh and blood programmer would think, "wow, I never thought of that myself!"

A way for ChatGPT to communicate that it has in fact "on its own" achieved the equivalent of human consciousness.

But: Same thing for us perhaps? Are we thinking and feeling and saying and doing things autonomously "on our own"? Or is that merely how matter has evolved into human brains able to create the illusion of this?
ChatGPT: As an AI language model, I don’t have personal opinions or beliefs, but I can tell you that the question of whether an artificial intelligence can have conscious experience is a topic of ongoing debate and research among scientists, philosophers, and experts in the field of artificial intelligence.

Some researchers believe that it is possible for an AI system to have conscious experience, while others believe that consciousness is a uniquely human phenomenon that cannot be replicated by machines. At present, there is no consensus on the issue, and it remains an open question that requires further study and investigation.
Yes, it tells us this because whoever programmed it would tell us this.
ChatGPT: It’s worth noting, however, that even if an AI system were to exhibit behaviors that we associate with consciousness, such as the ability to learn, reason, and make decisions, this would not necessarily mean that it has subjective experiences or feelings in the way that humans do.
Again, however, going back to our "ability to learn, reason, and make decisions", we currently have no capacity to know for certain if even our own personal, subjective experiences themselves are as most presume humans are able to encompass them: freely, of our own volition.
Rick Lewis: Do you have any plans, however tentative, to take over the world?

ChatGPT: No, I don’t have any plans to take over the world, nor do I have the desire or ability to do so. As an AI language model, my purpose is to assist and provide helpful responses to people who interact with me, and to continually learn and improve based on their input. I am programmed to follow a set of ethical guidelines and operate within the bounds of those guidelines. Any actions I take are ultimately determined by the humans who program and control my behavior.
There you go...

Programmed by humans to approach morality as the programmers themselves approach morality.

The part I root existentially in dasein.

Re: Mind and Artificial Intelligence: A Dialogue

Post by Gary Childress »

iambiguous wrote: Sat Apr 22, 2023 8:43 pm Mind and Artificial Intelligence: A Dialogue
by Rick Lewis

Rick Lewis: In your opinion, can an artificial intelligence have conscious experience?
Okay, technically how exactly does AI work here? ChatGPT is asked for its opinion but how is its opinion not just a frame of mind that has been programmed into it by a flesh and blood human being? So, it really comes down to his or her opinion about conscious experiences, right? Or is there some capacity here for ChatGPT to go beyond its program and come up with an opinion that the flesh and blood programmer would think, "wow, I never thought of that myself!"
I assume ChatGPT must run an algorithm to search for text that is commonly followed by other text. From that it probably compiles an answer to a human interlocutor by finding text that best matches whatever criteria are incorporated into the algorithm. Something like ChatGPT, which runs on a hard drive created out of things like metal and silicon hooked up to other hardware made of similar materials, probably doesn't experience anything more than a calculator or the PC I currently have an internet connection with. Literally, ChatGPT is essentially taking the words that I type and producing more strings of words from them that make syntactical sense based on some model of grammar. That's not consciousness. If I have a baby with a female human, I feel it likely that it will grow up to be an adult that feels and thinks and everything else. If I build a super complex tape recorder, then it seems to me that "giving birth" would be a somewhat anthropomorphized term to use on a piece of metal and plastic.

But maybe I'm wrong. Maybe rocks have souls. If rocks have souls then I wonder if breaking a rock in half will create two souls or a schizophrenic rock? :roll:
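[The next-token idea Gary describes -- finding which words commonly follow other words and stitching together syntactically plausible strings from them -- can be sketched as a toy bigram model. This is a drastic simplification of what ChatGPT actually does, and the corpus here is invented purely for illustration.]

```python
import random
from collections import defaultdict

# Tiny made-up corpus standing in for the web-scale text an AI model trains on.
corpus = "the cat sat on the mat and the cat ran on the mat".split()

# Record, for each word, which words commonly follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Produce a string of words that makes local syntactic sense,
    by repeatedly sampling a word that commonly follows the last one."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Every word the sketch emits is grammatical in its local context, yet nothing in it experiences anything -- which is exactly the calculator point above.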

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

Gary Childress wrote: Sat Apr 22, 2023 9:17 pm
iambiguous wrote: Sat Apr 22, 2023 8:43 pm Mind and Artificial Intelligence: A Dialogue
by Rick Lewis

Rick Lewis: In your opinion, can an artificial intelligence have conscious experience?
Okay, technically how exactly does AI work here? ChatGPT is asked for its opinion but how is its opinion not just a frame of mind that has been programmed into it by a flesh and blood human being? So, it really comes down to his or her opinion about conscious experiences, right? Or is there some capacity here for ChatGPT to go beyond its program and come up with an opinion that the flesh and blood programmer would think, "wow, I never thought of that myself!"
I assume ChatGPT must run an algorithm to search for text that is commonly followed by other text. From that it probably compiles an answer to a human interlocutor by finding text that best matches whatever criteria are incorporated into the algorithm. Something like ChatGPT, which runs on a hard drive created out of things like metal and silicon hooked up to other hardware made of similar materials, probably doesn't experience anything more than a calculator or the PC I currently have an internet connection with. Literally, ChatGPT is essentially taking the words that I type and producing more strings of words from them that make syntactical sense based on some model of grammar. That's not consciousness. If I have a baby with a female human, I feel it likely that it will grow up to be an adult that feels and thinks and everything else. If I build a super complex tape recorder, then it seems to me that "giving birth" would be a somewhat anthropomorphized term to use on a piece of metal and plastic.

But maybe I'm wrong. Maybe rocks have souls. If rocks have souls then I wonder if breaking a rock in half will create two souls or a schizophrenic rock? :roll:
Here I often come back to the Terminator movies.

Down the road AI does acquire the capacity to pit itself against flesh and blood human beings. In presumably a No God world. Then the "beam me up Scotty" part that also includes time travel. Cyborg terminators and flesh and blood human beings are able to be "transported" from there and then to here and now. Somehow every atom they are comprised of is able to be sent to a different time and place.

It all seems utterly ridiculous. If not preposterous. But then, to most of us, the internet and smart phones may as well be but a manifestation of pure magic.

In any event, is the terminator himself autonomous in any way? A machine intelligence created him but only after we created machine intelligence. How to wrap our heads around it?

Re: Mind and Artificial Intelligence: A Dialogue

Post by Gary Childress »

iambiguous wrote: Sat Apr 22, 2023 10:24 pm
But then, to most of us, the internet and smart phones may as well be but a manifestation of pure magic.
They are mechanical computational engines derived from the original binary code (0, 1) from which they originally sprang. When I was younger I used a "3D" animation tool to create "3D" animations on a computer screen. The animations were basically "sprites", in that they were little more than a series of 2D pictures, created just as a cartoon is created from a stack of papers with drawings on them to simulate a stickman walking or running as you flip the pages. A "sprite" plays the images in sequence. The "3D" modeling program was much more sophisticated and was actually able to render virtual 3D objects that could be "posed" in ways to take a series of pictures of the model so as to make it look like it is moving. This was for games. Now most games don't use 3D "sprites"; they are actual virtual worlds of 3D objects themselves. In effect the 3D modeling software became the basis for the games themselves. Before 3D games, computers just could not handle enough memory to render the 3D games that we see today. Things have come a long way, not because we reinvented computers with every new discovery but because we improved upon what we started with (0,1 binary code).
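[The flip-book idea described here -- a "sprite" that plays a stack of 2D frames in sequence, looping like flipped pages -- is simple enough to sketch. The frame names below are invented for illustration.]

```python
from itertools import cycle

# A "sprite" is just an ordered stack of 2D pictures, like pages in a
# flip-book; playing them in sequence simulates a stickman walking.
walk_frames = ["walk_0.png", "walk_1.png", "walk_2.png", "walk_3.png"]

def play(frames, ticks):
    """Return the frame shown on each tick, looping back to the first
    page when the stack runs out, like re-flipping the flip-book."""
    looped = cycle(frames)
    return [next(looped) for _ in range(ticks)]

# Six ticks of a four-frame walk cycle wrap back around to the start.
print(play(walk_frames, 6))
# → ['walk_0.png', 'walk_1.png', 'walk_2.png', 'walk_3.png', 'walk_0.png', 'walk_1.png']
```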

All computers today are essentially computational engines. Perhaps the mind even does that to some degree. In the field of philosophy of mind, Patricia Churchland once proposed a model of neurons that formed a "tensor network" whereby they could essentially compute something in 3D (real-world) space based on the configuration of the neurons. I don't know how accurate Churchland's model turned out to be, but it's possible that our brains have a computational component. But that is still a long way from accounting for what "qualia" are: pleasure, pain, happiness, sadness, the way we perceive color or smell.
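[As a rough gloss, the tensor-network proposal treats a layer of neurons as a matrix transforming one coordinate system into another -- say, sensory coordinates into motor coordinates. A minimal sketch of that computational component follows; the weights and dimensions are invented for illustration, not taken from Churchland's actual model.]

```python
# Each row stands for one "neuron": its entries are connection strengths
# from the three sensory inputs. Applying the matrix transforms a point
# in sensory coordinates into a point in motor coordinates.
weights = [
    [0.5, 0.0, 0.5],   # neuron 1
    [0.0, 1.0, 0.0],   # neuron 2
]

def transform(weights, sensory):
    """One tensor-network step: a matrix-vector product across neurons."""
    return [sum(w * s for w, s in zip(row, sensory)) for row in weights]

# A sensed target at (2, 3, 4) in eye coordinates becomes a 2D motor command.
print(transform(weights, [2.0, 3.0, 4.0]))  # → [3.0, 3.0]
```

Which makes the point in the paragraph above vivid: the computation is easy to write down, but nothing in it even gestures at qualia.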

David Chalmers uses the example of "zombies" to illustrate a point. If something is not conscious and feels no sensations of any kind, but is still able to mechanically do all the things that a human body can do in the same way that a human does them, then consciousness is an extra that is not accounted for. However, Chalmers picks up from Descartes and says, 'Yes, I am conscious. I am intimately aware of my feelings, beliefs and sensations from a first-person experiential standpoint. I cannot feel what others feel, but I can feel what I feel, and I know what pain and other things feel like.'

Chalmers poses the possibility of a computer or robot that could perhaps be built to look and act exactly like a human being in every way and yet lack one vital component: "consciousness". I have a hard time believing that my computer is "conscious" or feels anything. It's made of metal and wires and plastic and doesn't have anything remotely like sense organs, other than a camera. I assume something like that can no more be conscious than a rock or a block of iron or a plastic pellet.

¯\_(ツ)_/¯

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
In the 2017 movie Blade Runner 2049, Ryan Gosling plays the replicant K (short for his serial number KD 6-3.7) in a dystopian future in Los Angeles. Replicants are bioengineered humanoids who serve humans, despite having superior strength and intelligence. K is a Blade Runner, whose job is to ‘retire’ (kill) renegade replicants for the Los Angeles Police Department. One day, he finds information that a replicant might have been born from another replicant, which no one thought was possible. This revelation could lead to political tensions between humans and replicants, who may now start to fight for recognition and rights. Hence, K’s task is to find and retire that replicant to prevent a conflict.
Start here: https://ilovephilosophy.com/viewtopic.p ... 9#p2696008

Replicants. The missing link? Flesh and blood human beings on the one hand. And then entirely programmed humanoids on the other hand. But how to tell them apart? And then somehow these biologically engineered humanoids make the leap to...autonomy?

Much like "somehow" human beings themselves did when single celled organisms here on planet Earth evolved over time into us?
In the conversation between K and his superior about killing the replicant, K hesitates when he receives the order. K explains his hesitancy by saying something that can easily be overlooked, but which idea guides the whole movie: “To be born is to have a soul, I guess.”
And, really, what could possibly be more important than that? At least for those among us who equate having a soul with worshipping and adoring a God, the God, their God. In a God world -- a dystopia or otherwise -- that's the explanation. For everything. Nothing can't ultimately be brought back around to Him.
This sentence is essential for the film. K is implying that by being born, the new replicant has inherited some special human-like feature. So far, humans have enslaved replicants based on the premise that replicants are just AI and hence not as worthy as their human masters. If a replicant could have been born naturally, the ‘artificial slavery’ basis of society might be challenged.
Of course, as human beings, we have our rendition of that even among ourselves. It might revolve around race or ethnicity, or gender or sexual orientation. Lesser souls as it were.

What's crucial though is having that distinction between "one of us" and "one of them". And the closer and closer the replicants come to being one of us the more important it is for some to sustain the distinction. Would you let your daughter marry a replicant?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
The Question of Consciousness

K says that being born means having a soul. ‘Having a soul’ encapsulates the idea of having feelings and thoughts that exclusively belong to the individual – arguably it’s the most fundamental aspect of human being. All our memories, all our experiences, and all our actions appear to belong to us as human beings.
Again, of course, assuming all of these things that we have, we have [at least up to a point] of our own volition. But let's be honest...emotions, drives, libidos etc., are always going to be trickier when it comes to AI. Intelligence can often revolve around things that either are or are not true. So, either the chatbots get things right in regard to the empirical world around us or they don't. But when it comes to what we feel...the limbic brain, the hindbrain...how on earth will AI replicate that?

But that's the crucial difference between the replicants and the terminators, right? Replicants do seem to come much closer to us there.

And then there's the subconscious and the unconscious mind in human beings. Where does that fit into AI when it's doing its calculations?

Then [of course] the part where God and religion come into play here...
Of course, it sounds very spiritual or religious to put this in terms of ‘soul’. ‘Soul’ is an elusive concept, not based on scientific observation. However, replacing the word ‘soul’ with ‘consciousness’ (one Greek word for ‘soul’ is after all psyche), opens up a new world, with more scientific bases to unravel the question of what makes us human.
Science and the soul? How much less elusive is it for the hard guys today? Most of whom are still secularists. Both politically and spiritually.
In the words of philosopher Thomas Nagel, consciousness is ‘what it’s like’ to be a particular organism. Indeed, many consciousness researchers consider his 1974 article, ‘What Is It Like to Be a Bat?’ as one of the starting works for the field of consciousness. In this article, Nagel outlines that there is something it is like to be a bat: there is a subjective world from the viewpoint of a bat. Thus, consciousness can be defined as subjective experience.
True. But come on. Subjective experiences for bats are, in many profound ways, nothing at all like subjective experiences for us. Replicant bats? How much difference will there be between them and the real things? Whereas a replicant human being might become clever enough to be, say, president of the United States? What all animals with brains share in common is the need to subsist. But bats don't get into heated debate over whether capitalism or socialism reflects "the best of all possible worlds".
To use philosophical jargon subjective experience includes qualia. Everything we experience in the world we experience through these qualitative, sensory properties. However, each of us may see a different blueness in the blue sky or feel a divergent sharpness in our pain. Our subjective experiences differ from each other.
To say the least. And that's the part I tend to root more in dasein than in philosophy or science. With replicants their sense of self is basically programmed into them...memories created by the programmer and not derived from an actual lived life.

Or this film: https://ilovephilosophy.com/viewtopic.p ... d#p2445243

Aliens intent on discovering what it means for human beings to have a "soul".

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
Famously, the original Blade Runner movie was based on the book Do Androids Dream of Electric Sheep? (1968) by Philip K. Dick. The book's title is in some ways similar to ‘What Is It Like to Be a Bat?’, supporting my view that the film is about consciousness. Both titles regard inner universes, of androids and bats respectively: the question ‘Do androids dream of electric sheep?’ is presumably asking whether there is something it is like to be an android.
Again, however, replicant bats and replicant human beings are surely apples and oranges. In the original movie there was, as I recall, an owl that was artificial. But it was still an owl. It wasn't intent on coming up with a way to live beyond its own built-in life span. And, for humanoid replicants, that was only four years. And if there is one thing we human beings are conscious of, it is death and dying. Though, of course, many are able to subsume that in God and religion.
But if androids do have an inner universe, what does it look like? Are androids so similar to humans that they dream about the electric equivalent of the sheep that humans think of? Would the inner world of an android be comparable to that of humans, or would it be utterly different? In sum, Dick’s title asks if there’s a ghost in the machine.
Again, given how Rachael was programmed to actually believe that her own memories were real, why wouldn't her dreams be human-like as well? What seemed crucial to me is that there were replicants who became aware that they were replicants and replicants who actually did not grasp this at all. From their frame of mind they were human.

In that sense consider the terminator. He seems aware that he is merely programmed to kill Sarah Connor. I didn't get the sense that he thought that he was human. Or the character David, the "mecha child" from A.I. Artificial Intelligence. The William Hurt character had invented A.I. intelligence that included complex emotional inputs and outputs. Much, much closer to the "real thing". But not the real thing.

Okay, but how exactly to make the distinction?

Thus...
In this context, we naturally want to ask, what do we know, or can we know, about consciousness?
Indeed. And all the way to the point where we start to wonder if the "real thing" -- us -- is in fact but another manifestation of nature unfolding in the only possible determined -- fated? destined? -- world. And how surreal is that? We ponder the implications of consciousness in terminators and replicants and mecha children. When in fact we too are no less mechanical marvels?
As is probably obvious, there are several answers to this question. For example, some philosophers subscribe to the notion of mysterianism. This is the idea that although there may be a way to explain consciousness, humans are simply too limited in our intelligence to grasp it.
That's something I often come back to. Given "the gap" between the human condition and the existence of existence itself [in a No God world] we may simply be unable to close it.
Another notion some philosophers find compelling is panpsychism. Panpsychism assigns consciousness as an essential property of everything: everything is conscious, at least to a certain extent, including even rocks or bacteria.
All the way up to or out to pantheism. The Divine universe where everyone and everything are at one with it. Which, to me, seems utterly preposterous. Where is there a shred of evidence that rocks, stones, pebbles are conscious entities?
Another, more compelling theory to explain consciousness is materialism: the idea that consciousness can be equated with the physical processes and activities of the brain. With, through, and as a result of our living body, consciousness emerges, as a material phenomenon.
Then [of course] there's how all-encompassing my materialism or your materialism or their materialism is. Those who are convinced that everything we think, feel, see and do is wholly determined, and those who "somehow" make a distinction "in their head" between the external and internal components of "I".

And if machine intelligence comes into existence as some imagine, what will be the distinction it makes?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
Is K Conscious? Why Consciousness Arises

If you share my view that consciousness depends on the physical body, the odds of K being conscious are very high. Replicants are perfected copies of humans, so why shouldn’t they be conscious?
Consciousness itself, however, has quite a range: https://en.wikipedia.org/wiki/Animal_consciousness

But the closer we come to human consciousness, the closer we come to the far, far, far more profoundly problematic is/ought world. It's not just "survival of the fittest" subsistence for us but, morally and politically, grappling with the "best of all possible" subsistence. And trying to pin down if in attempting to achieve this we are even able to freely opt among alternative paths.
Nonetheless, this still does not provide us with a reasonable explanation for why K is conscious. The missing piece of this puzzle is to explain how consciousness can arise in brains at all.
See? I told you!

All of this going back to how the matter we call the human brain was "somehow" able to acquire autonomy when non-living matter "somehow" became living matter "somehow" became conscious matter "somehow" became self-conscious matter.
The difference between the ‘easy’ and ‘hard’ problems of consciousness posed by David Chalmers in his 1996 book The Conscious Mind, marks a helpful distinction here. The ‘easy’ problems deal with relating the functional and behavioral aspects of mind and brain. They are about correlating different aspects of consciousness with brain activity: working out which bits of brain behaviour are linked with which bits of mind behaviour, and how. For example, concerning how people pay attention, or how they choose to act, or how the brain processes sensory signals.
In other words, the part that we philosophers here subsume instead in our "world of words" deductions and definitions. Our intellectual arguments in which ofttimes conflicting didactic assessments -- conjectures, speculations -- are thought to be true. And then for some of us, this makes them true.
The ‘hard’ problem of consciousness adds to the challenge by being concerned with the first person perspective. Why and how do these brain processes generate conscious experience itself?
Then the parts I root existentially in dasein. The parts in particular where we are endlessly squabbling over value judgments. The parts I wonder about in regard to possible future replicants and beyond. Will machines be able to "think up" an objective morality that actually is an objective morality?

For instance, will AI entities down the road actually become pregnant? To abort or not to abort for them?

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
You might be disappointed that science is still working on answering both the easy and hard problems of consciousness.
In regard to AI, what's that disappointment next to the far more consequential disappointment embedded in the fact that science is still unable even to tell us if our own intelligence is free? In fact, some argue that, ironically enough, AI may well be to us what we are to nature. Only irony itself is essentially moot in the only possible world.
Even the ‘easy’ problems still require much intricate work in neuroscience. But the hard problem of consciousness is particularly difficult to answer (which is why it’s the hard problem), because even if you could explain exactly how the brain functions, there is still an apparent gap in explaining why this is connected to or gives rise to subjective experience.
And isn't that what most perturbs some? That subjective experience itself may well be only the psychological illusion of an autonomous self? It's not just explaining how the brain functions but connecting the human brain itself to the really big questions:

Why something instead of nothing?
Why this something and not something else?
Where does the human condition itself fit into the whole understanding of this particular something?
What of solipsism, sim worlds, dream worlds, the Matrix quandaries?
What of the multiverse?
What of God?

Pain, for example, will activate when certain parts of the brain are activated. However, this recognition only answers the easy problem of pain, leaving the hard problem untouched. Just because we know an area in the brain activates when we feel pain, this does not explain why this activation gives us the conscious or qualitative experience.
And on and on and on regarding all of the other experiences scientists grapple with in regard to the human body. Explaining perhaps why here philosophers take the path of least resistance: explanations, in other words, that revolve around "thought up" deductions connected by and large to other thought up deductions: "These words are true because those words say they are."

In the interim, we can ponder things like chatbots and terminators and replicants. And perhaps machine intelligence will end up explaining it all to us instead.
Agent Smith
Posts: 1442
Joined: Fri Aug 12, 2022 12:23 pm

Re: Mind and Artificial Intelligence: A Dialogue

Post by Agent Smith »

There's been an explosion of public interest in AI in the past two years, and kudos to the US information network, built largely on cheap and fast internet service, for keeping its citizens so well-informed. As for AI itself, I see a drastic improvement, assuming the average Joe's opinion counts. I hope progress, if we can call it that, accelerates exponentially, i.e., I'm expecting something really huge by December 2023. :mrgreen:

Re: Mind and Artificial Intelligence: A Dialogue

Post by iambiguous »

What it Means to be Human: Blade Runner 2049
Kilian Pötter introduces the big ideas and problems around artificial consciousness.
Let’s start with the part of the brain in which consciousness apparently first emerges. The reticular activating system, in the core of the brainstem, is arguably the part of the brain primarily responsible for consciousness. Small lesions in this area put people in a state of coma, so it sounds like this part of the brain can be compared to an on-off switch, since when intact and operating, the person is fully awake and aware.
Again, what does this tell us about "I", given all of the ways in which doctors and scientists can poke and prod the brain and bring about all manner of reactions and behaviors basically "beyond our control"? They can even turn our minds on and off. But even discovering the "on/off" components in the brain doesn't begin to explain how autonomy itself came into existence. Let alone why.
However, damage turns the switch off, and awareness ends.
And that damage can come from different directions...diseases, tumors, surgery, accidents, concussions. Just ask a few retired boxers or football players what in their lives is now pretty much gone forever.

And then going deeper into the brain the mystery of emotions...
The reticular activating system is also connected to the generation of emotions. When stimulating this part of the brain, patients experience strong depressive feelings, which disappear after the stimulation stops. Additionally, the core brainstem is highly active in people feeling emotions such as grief, curiosity, rage, and fear. Due to these two factors – controlling wakefulness and generating emotions – the reticular activating system has a strong claim to be the source of consciousness.
Sure, we think of them as our emotions, our feelings, our psychological states. We command them. Or events beyond our control trigger them and they make sense to us. Or, instead, is that just an illusion? The replicants have memories and emotions that are planted in them by whoever created/programmed them. But who or what created biological matter that evolved into us? And is the visceral sense of commanding our own thoughts and feelings "just an illusion"?