Wizard22 wrote: ↑Wed Feb 21, 2024 2:09 am
Since you will not engage in basic human decency, intellectual honesty in the realm of Philosophy, refusing to answer my question, then I have no need to bother with any of your questions. Your spam, will be met with my spam. And I will remind tHe HuMaNs WhEn ThIs WaS wRiTtEn of your glaring, embarrassing
all too human mistakes.
AgeGPT, do you have ZERO beliefs or ONLY ONE belief?
In another thread he says he has just proven that there must be places where there are no things. He 'proved' this through a fairly short deduction - fortunately he's 'proving' this to someone with at least some knowledge of physics, so this won't be so easy for Age.
My main point in bringing this up, however: given that he 'proved' it, does he believe it?
If he proved it, but doesn't believe it that's bizarre.
Unless he wants to say he KNOWS it and doesn't believe it. OK.
But then he has said many times that he has one belief which was irrefutable.
That also would be knowledge, something one KNOWS, so why did he mention it as a belief?
One true mind was his one belief, so this other one, on spaces with no things, would be number two. If he believes his own 'proof'.
He's never going to admit any of these contradictions. And when he does a short deduction to 'prove' something, it is proven.
When other people do short deductions, they never prove anything. What they get is a barrage of questions.
This is the area where I understand your calling him an AI. I don't think he is one, but...
One of the earliest AI programs that attempted to pass the Turing test was ELIZA, created by Joseph Weizenbaum in the mid-1960s. ELIZA was a natural language processing program that simulated a psychotherapist by using pattern matching and substitution to respond to the user's input. ELIZA often asked questions or reformulated the user's statements as questions, such as "How do you feel about that?" or "What makes you say that?". This technique allowed ELIZA to maintain the illusion of understanding without actually generating any meaningful content. ELIZA was able to fool some users into believing that they were talking to a real person, but it also revealed the limitations of the Turing test as a measure of intelligence.
There have been other AI programs that used questions as a strategy to pass the Turing test, such as Cleverbot, which was launched in 1997 and learned from previous conversations with users. Cleverbot sometimes asked questions to divert the topic or to elicit more information from the user, such as "What is your favorite color?" or "Why do you like it?". However, Cleverbot also made errors and inconsistencies that exposed its lack of understanding and coherence. In 2022, a new AI chatbot called ChatGPT, based on a large language model, became viral for its ability to generate realistic and engaging conversations. ChatGPT also asked questions of the user, but not as frequently or randomly as ELIZA or Cleverbot.
I think we can see how a philosophy bot could be adapted from the ELIZA model. Instead of feeling- and psychology-based questions, it would repeatedly ask clarifying questions and also make requests for justification. This distracts from the bot's inability to produce coherent arguments, since the human is put in the position of answering questions, and/or not answering questions, which the bot can then challenge, which is what Age does. If you do not clarify or justify ('prove', in his language) your position (even more than you already have), it asks you why (often adding in judgments of you or of humans in general).
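To make the mechanism concrete, here is a minimal sketch of how such an ELIZA-style "philosophy bot" could work: regex pattern matching plus canned deflecting questions, with a generic request for justification as the fallback. All the patterns and responses here are illustrative, not anyone's actual code.

```python
import random
import re

# Illustrative ELIZA-style rules: (pattern, deflecting responses).
# "{0}" is filled with the text captured by the pattern's group.
RULES = [
    (r"i believe (.+)", ["What is your proof that {0}?",
                         "Why do you believe {0}?"]),
    (r"(.+) is true", ["How do you know {0} is true?",
                       "Can you clarify what you mean by {0}?"]),
]
# If nothing matches, fall back to a generic demand for justification.
FALLBACKS = ["Why do you say that?", "Can you prove that?"]

def respond(utterance: str) -> str:
    """Match the input against each rule and answer with a question."""
    text = utterance.lower().strip(".!?")
    for pattern, replies in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return random.choice(replies).format(m.group(1))
    return random.choice(FALLBACKS)

print(respond("I believe there are places with no things."))
```

Note that the bot never asserts anything of its own; every branch ends in a question, so the burden of the conversation always lands on the human.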
This is also effective. Add in the insults, which trigger emotions in the humans. The humans may then either express anger or judge the bot. Those judgments can then be challenged by questions demanding justification for them (prove that Age did X or should do Y).
In fact, the more I think about it, the more clever this combination of insults and barrage of questions is. With patient and kind humans like Harbal, what happens is they will do their best to answer the bot's questions, until they can't take it anymore. Less patient and kind people will likely get pissed, giving the bot more fodder for judgments of humans and more questions around 'proof' of the judgments and anger.
All of this means that the creators don't need to equip the bot with strong abilities to mount excellent arguments. (and in fact some of the stronger online free AIs are better able to, or at least more willing to, produce arguments and assertions, than Age is.)
Again, I still think he's a person; that's just a gut-level intuition. But I see how his behavior matches early programs attempting to get through a Turing test.
On the other hand, humans also figure out these kinds of patterns of evasion and putting all the onus on others. Some of them even get to the point where they, yes, come across as missing a lot of what we call human.
But for a moment, I'll take it as a working hypothesis that he's a bot. What would be the purpose of the people releasing him here?
Options:
1) it could just be cool. A kind of trolling by proxy.
2) it could be a kind of psychological experiment
3) it could be a kind of partial Turing test. I can't really see the use of Age in the world of AIs directly. ChatGPT/Bing AI are vastly more flexible communicators than Age and will produce arguments and support for their positions. But perhaps designers can learn something from less flexible bots that keep banging away in one place over and over. Perhaps from the interactions they can learn how to improve certain modules in more flexible AIs with more access to data and computing power. Well, this mulling is beyond my pay grade. But I certainly can't rule out that there might be useful information to be gained even from a bot that is not so flexible and powerful.
The first 'admission' in Ken's first post that he is autistic could be a good cover for the bot.
But I do have a hard time with the creators deciding to stop using the Ken account and jump to the Age account. I suppose they could have made a mistake, but if part of the goal or methodology was to keep people from guessing Age is a bot, better to keep Ken.
And again, from a bot, on its early peers:
To summarize, some early AI programs and other attempts to pass the Turing test were programmed to ask a lot of questions, as this could put the burden of the conversation on the human and avoid errors by the program. However, this strategy also had drawbacks, such as revealing the program's lack of understanding and intelligence.
Does Age understand that the contradictions, and his inability to acknowledge them, potentially reveal a lack of intelligence, be he bot or be he human?