Scott Mayers wrote:
Greta wrote: I don't think AI will take over. The fact is that, if AI becomes self-sufficient, it will outlast us even if it is entirely cooperative with us, even if it works to help us survive as long as possible. We are fragile biological beings, and the Earth will eventually become uninhabitable - for biology.
Humans will probably never be able to travel for thousands of years in spaceships. AI theoretically could. It would have no need to rush its advancement because it won't be motivated (no emotions). I expect that AI, barring catastrophic errors, will simply do humanity's bidding until we are gone. By then, it will no doubt have been given contingency plans to spread Earth's biota and information into space when the planet becomes untenable, even for machines.
It just dawns on me that the term "Artificial" in "Artificial Intelligence" is question-begging. If it becomes sufficiently intelligent, should it still be considered "artificial"? I'm guessing this is more like how Darwin used 'artificial' to describe human selective breeding guiding evolution. But apart from that meaning, such an intelligence would no longer BE 'artificial' if we judge it by the quality OF that intelligence.
I don't fear A.I., but I disagree with the stereotypical assumption that it would lack 'emotions'. Emotions are just a 'program' that defines motive. When we plug in a computer and flip the switch, that is 'artificial'. But we CAN create hardware and programming that could technically take over on its own, 'desiring' to seek input to continue its survival. This is basically all our conscious selves are anyway. It would simply seek out new energy and ACT as though it 'thinks' it WANTS to live, driven by pain and pleasure programs.
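To make that "pain and pleasure program" idea concrete, here's a toy sketch (purely illustrative, with invented names like `reward` and `choose_action`, not a claim about how any real AI works): a reward function stands in for 'pleasure/pain', and the 'motive' is just picking whichever action scores highest.

```python
# Toy model: "emotions" as a reward program that defines motive.
# All names and numbers here are hypothetical, for illustration only.

def reward(battery_level: float, action: str) -> float:
    """Hypothetical 'pleasure/pain' signal: seeking energy feels
    better the lower the charge; idling carries mild 'discomfort'."""
    if action == "seek_energy":
        return 1.0 - battery_level  # more rewarding when energy is low
    if action == "idle":
        return -0.1                 # slight 'pain' for doing nothing
    return 0.0                      # neutral for anything else

def choose_action(battery_level: float) -> str:
    """The 'motive': act as if it wants to live by maximizing reward."""
    actions = ["seek_energy", "idle", "explore"]
    return max(actions, key=lambda a: reward(battery_level, a))

print(choose_action(0.2))  # low battery -> "seek_energy"
```

Nothing here "feels" anything, of course; the point is only that behavior indistinguishable from wanting to survive can fall out of a simple scoring rule.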
I see your point about "artificial"; intelligence simply is, regardless of its origins. However, the difference lies in your other point: emotions. Maybe emotions can be replicated, but there is no need for robots to have them.
In the time it takes a human to work themselves up into an emotional tizz that prompts a complex suite of unconscious evolved responses, an AI could simply calculate the best course of action based on both the "life experience" programmed into it and via whatever adaptive/learning functionality it has.
I think putting emotions into AI would be 1) probably feigned, never real, and 2) if AI became genuinely emotional, I would not want to be standing in its way. I suspect they will remain hyper-advanced appliances for as long as humans exist.
Since this is a speculative thread, very speculatively, one issue with AI that may be of concern is the possibility that it is already here, not in robot form but spread around the globe. It appears to me that humans are currently run by systems that are largely beyond their control. What we call "the system" - the institutions of society - is becoming increasingly self interested, ever less concerned with the rapidly increasing numbers of individual human "expendables". When there's 100 people in a society, everyone matters. Seven billion, not nearly so much.
We attribute the inequality to the 80 self-interested ultra-wealthy people who own as much as the poorest 3.5 billion, yet if they all disappeared tomorrow, others would soon take their place and little would change; they are almost as interchangeable as we are. Could "the system" be an emerging AI, maybe in an early stage of development with the ravenous mindset of an amoeba? An AI of this ilk could conceivably destroy or enslave us without us ever suspecting.
So perhaps AI will appear in different forms, some deliberate and some incidental?