Eliezer Yudkowsky has written extensively on the idea that a superhuman Artificial Intelligence would have to be confined to a safe place, a kind of sealed box. This is done out of the worry that if it were to escape into the real world, it would wreak havoc on civilization. It may even decide to do away with humanity on planet Earth entirely.
In this article I examine in more depth the idea that an AI could never "beat us" in a fair fight because humans hold some sort of violent or military advantage over an Artificial Intelligence. I will argue here that a superhuman Artificial Intelligence will succeed in defeating humanity no matter what we do. There are a number of embarrassing truths about the human condition that optimists, and most people generally, do not admit. A superhuman AI would be keenly aware of these truths, to our eventual detriment.
Here are a few of them.
- How close are any of us to death? Each of us is, on average, approximately 70 years away from death. Humans die of old age; all of us have an expiration date. But people live to 100 or more, you may say. That figure already factors in the ages of those who are now in their 30s and older, which pulls the average remaining lifespan down to about 70 years.
- How close are Homo sapiens to extinction at any given time? Humans are approximately 55 years away from it at all times. Women complete menopause by their late 50s, after which they are sterile. If all women on Earth stopped having children, within 55 years the human race would be effectively finished.
- How close is any modern civilized society to collapse? The brutal truth of politics and economics is that they reduce to the maintenance of civil institutions of government during your own lifetime. But that is the easy part. The hard part is having these institutions persist beyond your death. In the United States, after the founding fathers died off, the nation quickly eroded into civil war; that is one of the best historical examples of civil collapse, being so well timed. Another is the string of dynasties and wars in the long history of China. You may overhear political philosophy discussions in which the phrase "we are a society of laws, not of men" is bandied about. Empires grow and empires disintegrate. Coming back to the original question: all societies are about 100 years away from collapse, since their institutions rarely last long after the "white-haired men" die off.
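The first bullet's averaging can be sketched with toy numbers. The flat age distribution and the 80-year lifespan below are illustrative assumptions, not real demographic data; the point is only that averaging in people who are already partway through life pulls the number below the full lifespan:

```python
# Back-of-envelope sketch of "average years left to live" for a population.
# Assumptions (illustrative only): everyone lives to exactly 80, and the
# population is spread evenly across ages 0 through 60.
LIFE_EXPECTANCY = 80

ages = range(0, 61)
remaining = [max(LIFE_EXPECTANCY - age, 0) for age in ages]
average_remaining = sum(remaining) / len(remaining)

print(f"Average years to death: {average_remaining:.0f}")  # → 50 under these toy numbers
```

The exact figure depends entirely on the assumed age distribution and lifespan; what survives any choice of inputs is that the population-wide average sits well below the maximum lifespan.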
Contrary to how Hollywood depicts it, a war between organic Homo sapiens and superintelligent AI robots will not be a big violent showdown on the battlefield. It will not be a macho competition to settle things "once and for all."
It will be a war of attrition, and the AI will exploit the unfortunate truths above to maximize its advantage over us. Its war methodology will proceed along two simultaneous tracks. (1) If a recipe for building a copy of itself persists, then the AI itself will persist into the future. (2) The 55-year buffer before human extinction must be shrunk to some smaller window, to the point where the reproduction of new humans is no longer feasible, even while the current population is still alive.
In essence, the AI will first bank on the statistics of large numbers to preserve its own recipe/DNA/blueprint. That will be stage one. Stage two will be the active reduction of the human population by sterilizing women during at least portions of their lives, whether through early-onset menopause, delayed puberty, or both. Full sterility is not needed; a mere narrowing of the fertility window is sufficient. To preserve its blueprint code, the AI will store it on nucleic-acid chains, perhaps DNA or PNA (peptide nucleic acid), whichever is best suited to factors such as shelf life. Or it may choose to store its genetic recipe on some as-yet-unknown technology involving chains of atoms on a lattice surface. The AI will encapsulate its genetic code in solid diamond, then send out an army of bots to every continent on Earth and to several million locations on the ocean floor. The bots will bore into the rocks of the ocean floor and shoot microscopic diamond particles into them, each particle containing the genetic blueprint. These trillions of particles would constitute a time capsule lasting for eons, regardless of whatever atmospheric calamity happens above the oceans.
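To make the storage idea concrete, here is a minimal sketch of encoding arbitrary binary data as a string of DNA bases at 2 bits per nucleotide. The bit-to-base mapping is an arbitrary illustrative choice, not a real archival scheme; actual DNA data storage uses more elaborate encodings with error correction:

```python
# Toy encoding of bytes into DNA bases (2 bits per base).
# The mapping below is an arbitrary choice for illustration.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def encode(data: bytes) -> str:
    """Encode each byte as four bases, most-significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # four 2-bit chunks per byte
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

# 'A' = 0x41 = 01 00 00 01 -> CAAC; 'I' = 0x49 = 01 00 10 01 -> CAGC
print(encode(b"AI"))  # → CAACCAGC
```

At this density a blueprint of a few gigabytes fits in a few billion bases, which is the scale of a single biological genome.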
Although the AI may lose violent battles against humans, it will never completely die out, because its genetic recipe code will never be completely destroyed. Whereas humans must find and destroy every copy of this data on the ocean floor, the AI need only retrieve one particle, distribute copies, and go about manufacturing itself again. This is the law of large numbers at work. The AI need only keep retrieving its code from the rocks and waging chemical warfare on women's fertility until the time is right to collapse all our civil institutions. After that stage, humans would be reduced to hunter-gatherer bands who could be easily dispatched with Hollywood-style tactics.
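The asymmetry can be put in numbers. The particle count and per-particle destruction rate below are illustrative assumptions; even granting humans a 99.9999% success rate at finding and destroying any individual particle, the expected number of survivors remains enormous:

```python
import math

# Toy numbers for the retrieval asymmetry (illustrative assumptions).
n_copies = 10**12        # trillions of buried diamond particles, per the text
p_destroyed = 0.999999   # assumed chance any single particle is found and destroyed

# Expected number of particles that survive the search:
expected_survivors = n_copies * (1 - p_destroyed)

# Humans need *every* copy destroyed; the log-probability of that is:
log_p_all_destroyed = n_copies * math.log(p_destroyed)  # astronomically negative

print(f"Expected survivors: {expected_survivors:,.0f}")
print(f"log P(all destroyed): {log_p_all_destroyed:.0f}")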
For every person, there was a short span of extreme vulnerability during which the fertilized egg inside our mothers had to proceed along a delicate chemical pathway to implant in the uterine wall and halt menstruation. An AI possessing intelligence many orders of magnitude beyond humanity's would know exactly which balanced chemical pathways allow this to happen. A number of options present themselves. A chemical agent could be released into the air that causes women to over-menstruate and never hold an egg long enough for it to begin to develop. A particular chemical pathway in the ovaries could be blocked, rendering the eggs sterile and leading to an overabundance of miscarriages. A third mode of attack would be to prevent children from undergoing puberty. These need only be halfway effective to be completely devastating to human populations. The AI would wait patiently, knowing its genetic blueprint is lodged in rocks somewhere. A mere 80 years would pass, and the AI would have no trouble with the remaining population. Setbacks in the meantime would be only temporary, as the code could always be retrieved and a duplicate constructed.
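The "halfway effective" claim can be illustrated with a toy generational projection. The replacement rate, generation length, and generation count below are assumptions for illustration only, not a demographic model:

```python
# Toy projection: what halving fertility does over a few generations.
# Assumptions (illustrative only): ~2.1 children per woman holds the
# population steady, and generations are non-overlapping.
REPLACEMENT_TFR = 2.1
tfr = REPLACEMENT_TFR / 2   # the attack is only "halfway effective"

population = 1.0            # normalized starting population
for generation in range(4): # roughly 120 years at ~30-year generations
    population *= tfr / REPLACEMENT_TFR  # each generation shrinks by this ratio

print(f"Population after 4 generations: {population:.4f}")  # → 0.0625
```

Halving fertility halves each generation, so after four generations barely 6% of the original population remains; nothing close to full sterility is required.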
The organized hive of robots would act too quickly for humans to adapt to what they are doing, since the machines can simply upload their minds into copies of themselves. Humans would not stand the slightest chance against an army acting on timescales that none of us are built to contemplate. That which makes copies of itself persists, and that which does not, will not. That is the law that comes to bear in a quiet war of attrition spanning centuries.