Greta wrote:Consider how much more still that AI, programmed with a portion of human intelligence, could achieve without any emotion or qualia.

The problem is with the word 'achieve'. We don't agree on what we want to achieve; what is the state of the world that we are aiming for? Any attempt to answer that question would just invite another generality, like: we want it to be 'better'. We can achieve all sorts of particular things, such as putting up buildings or writing philosophy posts, but we can't 'achieve' in the abstract.
The same goes for the notion of AI taking 'control'. Control of what? In order to do what? What would an example be of something 'in control'? Such questions are unanswerable, and not just because we don't have big enough computers.
So what is it that the AI wants? Since we can't come up with a specific answer for ourselves, how is the AI going to compute it?
I think this problem shows up in science fiction. When AI attacks, it is portrayed either as an individual human with mechanical bits, or as essentially a runaway train.