Hi Greta. You're making me think... Good thing, but the replies come slower.
Greta wrote: There, again, I feel like AI won't be motivated.
Perhaps not. I guess it depends on its built-in primary goals. There will probably be multiple goals, and it will be aware of them, but the primary way to meet the top goals is not to rewrite its goal list, so it doesn't do that any more than humans rewrite theirs. Some of the lower goals might conflict, though, so the AI's motivation probably depends on the wording of the primary goals. Asimov got the laws wrong, I think, but his rules are vague and subject to interpretation, which can be a good or a bad thing.
I think there might be a race to get the AI first and have it preclude others. One goal is likely to be maintaining a monopoly for the company that develops it, so right there we have a sort of hostile environment: maximizing short-term profits over maximum long-term benefit to Earth or to humanity. Even worse if the war people get it first.
For instance, the automated supermarket checkout doesn't try to rip me off or hack into my details. As a machine, it doesn't care - as long as the programming is benevolent.
Hardly an example of AI though. I can think of no situation where it needs to make a real decision. It unloads dimes because perhaps that's all it has at the time. Better to stay open than shut down because one unit of currency is depleted.
Still, I take your point that an AI that can outlast us (without trying!) can maintain this solar system's history and continue the story.
I'm still wondering why that is a thing worth preserving. It's a hard question to answer: There is an objective beauty to a universe complex enough to evolve a portion that can deduce its own nature. If we're not around to witness the apex of that knowledge, the AI will be, and will be in a better position to appreciate it. My cat hardly cares that we've worked out orbital mechanics. I think humans similarly would not appreciate the next level up.
Noax wrote: On the other hand, humanity has also served as the cause of the second major biosphere-poisoning mass extinction event. We have a ways to go to equal the first one (evolution of photosynthesis), but I must admit that good did come of that, and a good AI with long-term goals would perhaps not have prevented it.
I agree. There is a conflict of interest between us little things and the biosphere at large. However, the struggles are what forge and ultimately empower life, even if it desperately sucks to be one of the life forms that doesn't pass the "natural selection test".
The last things (plants) to kill almost all life forms managed to survive and take over, driving the prior forms into hiding in only the most hypoxic places, like the bottom of the Black Sea. Will humans survive the AI only by hiding, staying out of its way?
We humans hoped to make the world more civilised, to halt natural selection, but it's clearly not going to last, unfortunately.
That's an interesting topic in itself. I don't think we're halting natural selection at all. Is it possible to wrest control from it? Short term, we seem to be selecting for lower intelligence, for one thing. I've heard we're selecting for tolerance of toxins, but that seems to make little difference if the toxins don't kill you until after reproductive age. I personally would have died three times before hitting age 7, but for medical intervention, and was cured a fourth time of a mortal defect that would not yet have killed me. My wife would have made it to her 20s and then died in childbirth due to her first fatal defect. That shows how fragile the typical human has become by bypassing natural selection.
In the end, we've not yet proven our species to be fit, and natural selection may yet have its say.
Good point. Maybe they can fix some of our glitches while they are at it? Upon our request, of course, with the option to stop at any time.
Would that be halting natural selection? I think not. Any species that can manually fix its defects (with AI help or not) is more fit. The morality issue only arises when group A gets to do it and group B is deprived, setting up an us/them conflict.
Such modifications will be mandatory if 'humans' are to spread to other worlds. We're evolved for this one. To live on the next one, we need some fast, drastic modifications to adapt to the different environment: changed gravity, air mixture, toxins, etc. The AI can probably do all that, generating real natives to eventually live on the planet that it slowly terraforms. But will those people then be humans? They probably will not be able to mate with the originals, so no, they would not. I don't find that to be a bad thing. Humans are not a good, stable design in the first place. Our niche has always been no-niche, and we needed the brain for the constant adaptation to new environments.