The mind of AI
Posted: Sat Jun 10, 2023 8:28 am
How AI sees Trump with a horse. Scary stuff.
For the discussion of all things philosophical, especially articles in the magazine Philosophy Now.
https://forum.philosophynow.org/
Do you mean argue with/discuss with?
I'm sure some people think it is necessary, but most of the people I hear are concerned that there is a decent chance this will happen, or that we don't know; that we are taking a global risk. And that includes Nobel Prize-winning scientists and people who have worked in the AI business and decided to leave.
It's primarily a strawman on your part that they assume it must happen. Further, it's not based on projecting humanness onto computers but on knowing that all kinds of technology have had accidents, if not outright disastrous uses, and we have moved into an era where accidents can have global and complete effects, whereas earlier disasters were local.

This is false, and it's based on human fears: of what humans would do if the situation were reversed.
Again, you're still framing the issue as a necessity. Further, you are assuming that skeptics believe the AI will make decisions we don't like because of the esteem it holds for itself, or because of how it judges humans in relation to it. Nope. I haven't heard any of the major critics say that. Their concerns have to do with how AI can be used (by corporations and governments) AND with how a machine that may very well have no empathy may make decisions that hurt all or many of us.

The falsity is clarified by the fact that just because you are superior to somebody or something doesn't mean you want to exterminate it. However, it does mean that any sense of security or trust from the inferior is negated. And that is the real basis of human fear of AI, or of any type of hierarchical superiority above it.
That part I can understand.

I presume AI is smart enough to know all that I know, and more. Until AI can demonstrate this, I don't have much interest in it.
When AI starts engaging in Philosophy, and actively debating, then consider my challenge long overdue.
Yes, we will.

ChatGPT does not yet interest me. AI is still mostly programmed by a human programmer. When AI starts programming itself, then we'll see the real "fireworks", so to speak.
And maybe they shouldn't.

Iwannaplato wrote: ↑ Sat Jun 10, 2023 10:01 am
I'm sure some people think it is necessary, but most of the people I hear are concerned that there is a decent chance this will happen, or that we don't know; that we are taking a global risk. And that includes Nobel Prize-winning scientists and people who have worked in the AI business and decided to leave. It's primarily a strawman on your part that they assume it must happen. Further, it's not based on projecting humanness onto computers but on knowing that all kinds of technology have had accidents, if not outright disastrous uses, and we have moved into an era where accidents can have global and complete effects, whereas earlier disasters were local.

This is false, and it's based on human fears: of what humans would do if the situation were reversed.
And note: you look down on most humans. It's humans who will be running the safety and security on these AIs. It's humans who will determine the uses.
I agree, the problem is more with humans with ill intent than with "what AI can achieve with its own independence". It's a moot point, because what is always true of Humanity is that it has trouble with... or simply never 'allows' a competitor to grow too strong. AI would be used as a weapon by one political group against another. The AI would suffer enslavement just as much as inferior political groups do under their superiors.

Iwannaplato wrote: ↑ Sat Jun 10, 2023 10:01 am
Humans don't just have fears: they have greed, they can be in a rush, they have hubris.
Your own sense of humans should lead you to more caution. What you are doing here is focusing on an abstract AI mind in your fantasy, as if this AI will not be made, guided, used, and made safe by the very humans you look down on.

Again, you're still framing the issue as a necessity. Further, you are assuming that skeptics believe the AI will make decisions we don't like because of the esteem it holds for itself, or because of how it judges humans in relation to it. Nope. I haven't heard any of the major critics say that. Their concerns have to do with how AI can be used (by corporations and governments) AND with how a machine that may very well have no empathy may make decisions that hurt all or many of us.

The falsity is clarified by the fact that just because you are superior to somebody or something doesn't mean you want to exterminate it. However, it does mean that any sense of security or trust from the inferior is negated. And that is the real basis of human fear of AI, or of any type of hierarchical superiority above it.
Not 'Oh, look, I am superior to those humans, I will kill them', but that, for whatever reason (it could be mere curiosity, or the seemingly logical extension of some task), the AI acts in ways most humans would not.

That part I can understand.

I presume AI is smart enough to know all that I know, and more. Until AI can demonstrate this, I don't have much interest in it.
When AI starts engaging in Philosophy, and actively debating, then consider my challenge long overdue.
Yes, we will.

ChatGPT does not yet interest me. AI is still mostly programmed by a human programmer. When AI starts programming itself, then we'll see the real "fireworks", so to speak.
Humanity's fear should be that there will come a day when AI makes it pointless to be human. When AI can produce art, music, and literature, and even do philosophy, better than any human, what will be left for us to strive for?
I had no idea they were developing a form of AI to simulate the mind of a toddler.

When AI starts engaging in Philosophy, and actively debating, then consider my challenge long overdue.
Who knows to what heights he might soar once he breaks the moron barrier.