AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Assumption 3: The experts are the hardest to fool
Turing had a rather unusual understanding of the concept of ‘intelligence’. Not only did he believe that one does not need a biological brain to be intelligent – a view shared by many today – he also believed that whether or not something is intelligent is to some extent in the eye of the beholder. This is still a rather unusual position.
Okay, definitively, beyond any possible shadow of a doubt, let's at least attempt to pin this down philosophically in a world of words here: your definitions and deductions against everyone else's. Then we take the consensus to the hard guys and gals and see if they can connect the dots that we use to the dots that they use.
Also, I'm back to imagining Turing himself exploring the "concept of intelligence" with an AI entity in regard to the actual flesh and blood parameters of homosexuality. Is it objectively rational or irrational? Is it objectively moral or immoral?
In ‘Intelligent Machinery’ (1948), he expresses the idea that whether a machine is viewed as being intelligent depends on the person who judges it. We see intelligence, he argues, in cases where we are unable to predict or explain behavior. Thus, the same machine may appear intelligent to one person, but not to someone else who understands how it works. For this reason, Turing believed that the interrogator in The Imitation Game should be an average human, and not a machine expert.
Does that compute? I may be misunderstanding his point but it sounds a lot like suggesting that in regard to the determinism/free will/compatibilism discussion and debate, we ought to leave it up to the average human instead of the scientists. Or even the philosophers.
Though in regard to intelligence we still need a context: intelligent in regard to predicting or explaining what? Computers can calculate far faster and with greater sophistication than we can in any number of mathematical and scientific contexts. And if the programming is sophisticated enough, it can "know" more facts about any number of subjects than we flesh-and-blood human beings can, making its predictions and explanations preferable to most of us. Does that then make them more intelligent than we are? In what sense?
There is a bit of astonishment in the online community that a Google employee with a computer science degree – of all people! – would fall for the illusion of consciousness created by one of his company’s products. Why does he believe in LaMDA’s consciousness if he knows the technology behind it? Some have pointed to his spiritual orientation as an explanation: Lemoine is a mystic Christian. However, an important point is that the functioning of artificial neural networks is not easy to understand even for experts. Due to their complex architecture and non-symbolic mode of operation, they are difficult for humans to interpret in a definitive way.
And now this:
https://www.nytimes.com/2023/05/16/tech ... oning.html
Microsoft Says New A.I. Shows Signs of Human Reasoning
A provocative paper from researchers at Microsoft claims A.I. technology shows the ability to understand the way people do. Critics say those scientists are kidding themselves.
In any event, I'm the first to admit that I don't have either the education or the background to contribute to this discussion with any real degree of sophistication.
But for those who do, please get around to the part where AI chatbots, and whatever comes after them, explore this:
"How ought we to behave morally/rationally in a world bursting at the seams with both conflicting goods and contingency, chance and change?"
The parts I root in dasein in my signature threads and in the Benjamin Button Syndrome pertaining to identity in the is/ought world.