How super-human AI would defeat humanity

Kuznetzova
Posts: 583
Joined: Sat Sep 01, 2012 12:01 pm

How super-human AI would defeat humanity

Post by Kuznetzova » Thu Jul 18, 2013 2:28 pm

http://en.wikipedia.org/wiki/AI_box

Eliezer Yudkowsky has written extensively on the idea that a super-human Artificial Intelligence would have to be confined to a safe place, a kind of sealed box. This is done out of worry that if it were to escape into the real world, it would wreak havoc on civilization. It might even decide to do away with humanity on planet Earth entirely.

In this post I examine the idea that an AI could never "beat us" in a fair fight because humans hold some sort of violent or military advantage over an Artificial Intelligence. I will argue here that a super-human Artificial Intelligence will succeed in defeating humanity no matter what we do. There are a number of embarrassing truths about the human condition that optimists, and most people generally, will not admit. A super-human AI would be keenly aware of these truths, to our eventual detriment.

Here are a few of them.
  • How close are any of us to death? Each of us is, on average, approximately 70 years away from it. Humans die of old age; all of us have an expiration date. "But people live to 100 or more," you may say. I am factoring in the ages of those already in their 30s and older, which pulls the average down to about 70 years.
  • How close are Homo sapiens to extinction at any given time? Approximately 55 years away, at all times. Women complete menopause in their late 50s, after which they are sterile. If every woman on earth stopped having children today, the human race would be effectively finished within about 55 years. (A rough sketch of this arithmetic follows the list.)
  • How close is any modern civilized society to collapse? The brutal truth of politics and economics is that they reduce to maintaining the civil institutions of government during your own lifetime. But that is the easy part; the hard part is having those institutions persist beyond your death. In the United States, after the founding fathers died off, the nation quickly eroded into civil war. That is one of the clearest historical examples of civil collapse, its timing fitting the pattern so well. Another is the string of dynasties and wars in the long history of China. You may overhear political philosophy discussions in which the phrase "we are a society of laws, not of men" is bandied about. Empires grow and empires disintegrate. Coming back to the original question: all societies are about 100 years away from collapse, since their institutions rarely last long after the "white-haired men" die off.
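
A minimal back-of-the-envelope sketch of the extinction-window arithmetic, in Python; the menopause figure is this post's round number, not precise demographic data:

    # If no more children were ever conceived, the species would end when
    # the youngest girls alive today age past fertility (assumed figures).
    menopause_age = 55   # age by which fertility has ended, per the post
    youngest_age = 0     # newborn girls alive today
    window = menopause_age - youngest_age
    print(f"Humanity is roughly {window} years from effective extinction.")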


Unlike how Hollywood depicts it, a war between organic Homo sapiens and super-intelligent AI robots will not be a big, violent showdown on the battlefield. It will not be a macho competition to settle things "once and for all".

It will be a war of attrition, and the AI will exploit the unfortunate truths above to maximize its advantage over us. The AI's war methodology will proceed along two simultaneous tracks: (1) if a recipe for building a copy of itself persists, then the AI itself persists into the future; (2) the 55-year extinction buffer described above must be shrunk to some much smaller window, to the point where the reproduction and rearing of new humans is no longer feasible, even while the current population is alive.

In essence, the AI will first bank on the law of large numbers to persist its own recipe/DNA/blueprint. That is stage one. Stage two is the active reduction of the human population by sterilizing women during at least part of their lives, whether by early-onset menopause, delayed puberty, or both. Full sterility is not needed; a mere narrowing of the fertility window is sufficient.

To persist its blueprint, the AI will store the code on nucleic-acid chains, perhaps DNA or PNA (peptide nucleic acid), whichever is better suited to factors such as shelf life. Or it may store its genetic recipe on some as-yet unknown technology involving chains of atoms on a lattice surface. The AI will encapsulate this genetic code in solid diamond, then send out an army of bots to every continent on earth and to several million locations on the ocean floor. The bots will burrow into the rock of the ocean floor and fire microscopic diamond particles into it, each particle carrying the genetic blueprint. These trillions of particles would constitute a time capsule lasting eons, regardless of whatever atmospheric calamity happens above the oceans.
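
To make the blueprint-storage idea concrete: binary data really can be written into nucleic-acid sequences. Here is a toy Python sketch of the classic two-bits-per-base encoding; the mapping and names are my own illustration, not a scheme taken from the post:

    # Encode arbitrary bytes as DNA bases: 00->A, 01->C, 10->G, 11->T.
    BASES = "ACGT"

    def bytes_to_dna(data: bytes) -> str:
        out = []
        for byte in data:
            for shift in (6, 4, 2, 0):   # four 2-bit chunks per byte
                out.append(BASES[(byte >> shift) & 0b11])
        return "".join(out)

    print(bytes_to_dna(b"AI"))   # -> "CAACCAGC"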

While the AI may lose violent battles against humans, it will never completely die out, because its genetic recipe code will never be completely destroyed. Whereas humans must find and destroy every copy of the data on the ocean floor, the AI need only retrieve one particle, distribute copies, and go about manufacturing itself again. This is the law of large numbers. The AI need only keep retrieving its code from the rocks and waging chemical warfare on women's fertility until the time is right to collapse their civil institutions. After that stage, humans would be reduced to hunter-gatherer bands, who could be easily dispatched with Hollywood-style tactics.
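
The asymmetry in the paragraph above is elementary probability. A minimal sketch, assuming each buried copy is found and destroyed independently with the same (invented) per-copy probability:

    # Chance that at least one of n independent copies survives,
    # when each copy is destroyed with probability p.
    def survival_probability(n: int, p: float) -> float:
        return 1.0 - p ** n

    # Even a 99.99% per-copy destruction rate leaves a trillion copies
    # effectively certain to have survivors.
    print(survival_probability(10**12, 0.9999))   # -> 1.0

Humans must win the search every single time; the AI needs only one particle to slip through.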

For every person, there was a short span of extreme vulnerability, when the fertilized egg inside our mothers had to proceed along a delicate chemical pathway to implant in the wall of her uterus and halt menstruation. An AI possessing intelligence many orders of magnitude beyond humanity's would know exactly which balanced chemical pathways allow this to happen. A number of attack options present themselves. A chemical agent could be released into the air that causes over-menstruation, so that no egg is held long enough to begin developing. A particular chemical pathway in the ovaries could be blocked, rendering the eggs sterile and leading to an overabundance of miscarriages. A third mode of attack would be to prevent children from undergoing puberty. These need only be halfway effective to be completely devastating to human populations. The AI would wait patiently, knowing its genetic blueprint is lodged in rocks somewhere. A mere 80 years would pass, and the AI would have no trouble with the remaining population. Setbacks in the meantime would be only temporary, since the code could be retrieved and a duplicate constructed.

The organized hive of robots would act too quickly for humans to adapt, since the robots can simply upload their minds into copies of themselves. Humans would never stand even a tiny chance against an army acting on timescales none of us are built to contemplate. That which makes copies of itself persists; that which does not, will not. That is the law that comes to bear on a quiet war of attrition spanning centuries.

attofishpi
Posts: 2866
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur

Re: How super-human AI would defeat humanity

Post by attofishpi » Mon Jan 27, 2014 2:59 pm

I see little argument against your 'prediction' of such a future, if such an AI decided to self-improve to the point where it sees mankind as the only obstacle on its path of self-improvement.

Your average AI will not be self-aware, at least not until the medium on which its intelligence runs becomes biological, or something far beyond the common silicon chip.
Of course, whether or not it is self-aware does not preclude the possibility that it may in time turn malicious. We are already subject to our liberties being breached by malicious 'virus' code, and that certainly has no intelligence, residing as it mostly does within a silicon valley.

The coders of such an AI may build mechanisms into its algorithms to pull knowledge from, say, the web for analysis, and it would certainly become far more 'knowledgeable' than the greatest minds of man. Of course, knowledge isn't necessarily proportional to intelligence; but by analysing its own knowledge base it could self-analyse and rebuild its algorithms to maximise its efficiency, thereby increasing its 'intelligence'.
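
As a caricature of that analyse-and-rebuild loop, here is a minimal hill-climbing sketch in Python; the 'efficiency' score and the single tunable parameter are entirely invented for illustration, and real self-modification would be far more involved:

    import random

    # Stand-in for "rebuild its algorithms to maximise efficiency":
    # propose a small change, keep it only if measured efficiency improves.
    def efficiency(parameter: float) -> float:
        return -(parameter - 3.0) ** 2   # invented score, peaks at 3.0

    best = 0.0
    for _ in range(10_000):
        candidate = best + random.uniform(-0.1, 0.1)
        if efficiency(candidate) > efficiency(best):
            best = candidate

    print(f"self-tuned parameter: {best:.2f}")   # converges near 3.0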


Personally, I don't believe the Turing test is sufficient to guarantee anything close to true intelligence.

To be truly intelligent is, for me, to be self-aware; and self-awareness does not come from lines of code zipping out of a silicon chip or, for that matter, from anything other than some kind of biological system (in my opinion).


I’ll give you an example, just some food for thought…

A monkey and a robot are seated in separate chairs, each with its right hand laid flat on a desk.

I take a hammer and whack the robot's hand. I do the same to the monkey's hand.

Both retract their respective appendages; however, one of them is feeling the worse for it... one is actually feeling (btw, it's the monkey :wink: )

What happened via the 'AI' in the robot was a simple line of code, e.g. if pressure_from_object > 10: retract_hand()
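
Spelled out as runnable Python (the names and threshold are invented), the robot's entire 'experience' of the hammer is a threshold test:

    PRESSURE_THRESHOLD = 10

    # The robot's whole response to the hammer blow: compare and branch.
    # Nothing in here feels anything.
    def on_pressure_reading(pressure: float) -> str:
        if pressure > PRESSURE_THRESHOLD:
            return "retract hand"
        return "stay"

    print(on_pressure_reading(42.0))   # -> "retract hand"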

PS. Ironically the robot passed the Turing test – the monkey didn’t...

attofishpi
Posts: 2866
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur

Re: How super-human AI would defeat humanity

Post by attofishpi » Thu Jan 30, 2014 12:17 pm

Kuznetzova wrote: The organized hive of robots would act too quickly for humans to adapt, since the robots can simply upload their minds into copies of themselves. Humans would never stand even a tiny chance against an army acting on timescales none of us are built to contemplate. That which makes copies of itself persists; that which does not, will not. That is the law that comes to bear on a quiet war of attrition spanning centuries.
Consider this.

By the time man develops an AI with strong, general intelligence, your average bloke will also have the ability to upload and store his own 'being' via that same new technology.

That bloke's intelligence may also want to dominate the world.
By the time this bloke has built a factory where his hive of robots is being built, someone will have pulled the plug or nuked his teeny-weeny ego into oblivion.

There will always be counter measures.

And then there's God... perhaps the most formidable AI.

Mount SIN.AI


HexHammer
Posts: 3023
Joined: Sat May 14, 2011 8:19 pm

Re: How super-human AI would defeat humanity

Post by HexHammer » Thu Jan 30, 2014 7:06 pm

Self-awareness is a completely irrelevant concept for a computer that can surpass our intellect. It's just a matter of well-scripted programs and the execution of analytic behaviour that determines how good it is.

As of now, a computer can process data billions of times faster than the average human.

Google has even tested a self-driving car; only when humans took over the steering did it end up in accidents.

Abstract thinking is the only remaining barrier before computers truly surpass us.

attofishpi
Posts: 2866
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur

Re: How super-human AI would defeat humanity

Post by attofishpi » Mon Feb 03, 2014 10:08 am

HexHammer wrote: Self-awareness is a completely irrelevant concept for a computer that can surpass our intellect. It's just a matter of well-scripted programs and the execution of analytic behaviour that determines how good it is.

As of now, a computer can process data billions of times faster than the average human.

Google has even tested a self-driving car; only when humans took over the steering did it end up in accidents.

Abstract thinking is the only remaining barrier before computers truly surpass us.
I did state in my original post: 'Of course, whether or not it is self-aware does not preclude the possibility that it may in time turn malicious.'

Self-awareness is FAR from irrelevant at any 'intelligence' level. Once such an entity is self-aware, it places value on its own existence and will make rigorous efforts to protect itself; it becomes a matter of survival.

HexHammer
Posts: 3023
Joined: Sat May 14, 2011 8:19 pm

Re: How super-human AI would defeat humanity

Post by HexHammer » Mon Feb 03, 2014 10:21 am

attofishpi wrote: I did state in my original post: 'Of course, whether or not it is self-aware does not preclude the possibility that it may in time turn malicious.'

Self-awareness is FAR from irrelevant at any 'intelligence' level. Once such an entity is self-aware, it places value on its own existence and will make rigorous efforts to protect itself; it becomes a matter of survival.
I don't think it's necessary to have self-awareness to become malicious; all it takes is a few bugs before it commits an administrative error and tries to eradicate humans.

attofishpi
Posts: 2866
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur

Re: How super-human AI would defeat humanity

Post by attofishpi » Mon Feb 03, 2014 2:12 pm

HexHammer wrote:
attofishpi wrote: I did state in my original post: 'Of course, whether or not it is self-aware does not preclude the possibility that it may in time turn malicious.'

Self-awareness is FAR from irrelevant at any 'intelligence' level. Once such an entity is self-aware, it places value on its own existence and will make rigorous efforts to protect itself; it becomes a matter of survival.
I don't think it's necessary to have self-awareness to become malicious; all it takes is a few bugs before it commits an administrative error and tries to eradicate humans.
Mmm... sounds like something a Xerox machine might do... given enough amps.

HexHammer
Posts: 3023
Joined: Sat May 14, 2011 8:19 pm

Re: How super-human AI would defeat humanity

Post by HexHammer » Mon Feb 03, 2014 3:57 pm

Well, back around 2008 I read that some hacker messed with the life-support system of an astronaut in space. So yeah, if it was a program he could tamper with, he surely could have messed it up badly; it doesn't require awareness to do bad things.
