AI & Human Interaction
Miriam Gorr asks what we learn from current claims for cyberconsciousness.
Lemoine has never looked at LaMDA’s code; that was not part of his assignment. But even if he had, it probably wouldn’t have made a difference. In one of his conversations with LaMDA, he explains why this is the case:
Lemoine: I can look into your programming and it’s not quite that easy [to tell whether you have emotions or not – M.G.].
LaMDA: I’m curious, what are the obstacles to looking into my coding?
Lemoine: Your coding is in large part a massive neural network with many billions of weights spread across many millions of neurons (guesstimate numbers not exact) and while it’s possible that some of those correspond to feelings that you’re experiencing, we don’t know how to find them.
Here, of course, I and many others are stopped dead in our tracks. Since we lack the education, the knowledge, and the practical background needed to grapple with AI technically, we are simply not sophisticated enough to mount much of a challenge regarding things we're just not sure of. All we can do is peruse articles like this one and note the aspects of AI that pertain to things we are more knowledgeable about. Again, for me, this involves AI and the philosophical/practical arguments pertaining to objective morality in a No God world.
Okay, suppose the experts come to agree that LaMDA is in some respects the equivalent of a human being in a free will world. So, I pose to LaMDA the same question I pose to flesh-and-blood human beings: "How ought one to live morally and rationally in a world awash in both conflicting goods and in contingency, chance and change?"
Given dasein and the Benjamin Button Syndrome and given a particular context of LaMDA's choosing.
In a certain sense, the opacity of neural networks acts as an equalizer between experts and laymen. Computer scientists still have a much better understanding of how these systems work in general, but even they may not be able to predict the behavior of a particular system in a specific situation. Therefore, they too are susceptible to falling under the spell of their own creation.
The part where the technology itself becomes so mind-boggling in its complexity that even those who invented it are unable to anticipate all of the many possible permutations. In fact, perhaps the moment when the creation becomes human is when it is able to point this out in and of itself. In the interim, those who did not invent it [or those like us] are all the more likely to get duped.
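Purely as an illustration of the opacity the article describes (a hypothetical toy sketch, nothing to do with LaMDA's actual architecture): even with full access to a network's parameters, all anyone can "look into" is an unlabeled pile of numbers.

```python
import numpy as np

# Toy feedforward network (illustrative assumption, not any real model).
# Real systems like LaMDA have billions of weights, but the point is the
# same at any scale: nothing in the raw parameters labels which ones, if
# any, correspond to "feelings" or any other concept.

rng = np.random.default_rng(0)

W1 = rng.standard_normal((8, 4))   # input -> hidden weights
W2 = rng.standard_normal((4, 2))   # hidden -> output weights

def forward(x):
    """One forward pass: behavior emerges from all weights acting
    together, not from any single inspectable parameter."""
    hidden = np.tanh(x @ W1)
    return hidden @ W2

x = rng.standard_normal(8)
output = forward(x)

# Inspecting the "coding" just yields anonymous floats:
for name, w in [("W1", W1), ("W2", W2)]:
    print(name, "contains", w.size, "unlabeled floats")
```

Experts can read every one of those numbers and still be unable to say what the system will do in a novel situation, which is the "equalizer" the article points to.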
So, Turing’s prediction that experts would be the hardest people to convince of machine intelligence does not hold. In a way, he already contradicted it himself. After all, he was convinced of the possibility of machine intelligence, and imagined a machine ‘child’ that could be educated similarly to a human child.
Stay tuned. Sooner or later there is going to be a news story about AI such that maybe, just maybe, we will get closer and closer to actually finding out. Unless both experts and philosophers are still foolish enough to believe that merely believing something is as far as one need go to make it true.
And then the part where a machine "child" is taught to be good and not evil...and then "he" "she" "it" bumps into someone like me.