How to minimize the danger of AGI (artificial general intelligence)

prof
Posts: 1076
Joined: Wed Jul 11, 2012 1:57 am

How to minimize the danger of AGI (artificial general intelligence)

Post by prof »

There is a danger of a super-intelligent machine falling into the hands of a tyrannical government.

This is in regard to an artificial-intelligence computer that is programmed to produce a machine even more intelligent than itself, and instructed to pass that goal on to each further generation.

I suggest that such a learning machine also be programmed to empower people with the capacity to make practical choices they can implement, and to make decisions that open up even more choices for them.

The idea I am proposing is to also program the super-intelligent computers to provide tools that facilitate implementing these choices, giving people more options in life and furthering human betterment.
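To make this concrete, here is a toy sketch in Python of one way such an objective might be scored. Everything in it is an invented illustration, not a real design: a human in a small gridworld, and a machine rewarded by how many squares the human could still reach after the machine acts, a crude count of the human's options.

[code]
# Toy sketch only -- the grid size, moves, and scoring are all
# illustrative assumptions, not a real AGI design.

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # the human's possible steps

def reachable(pos, walls, size, k):
    """Squares a human at `pos` could reach in at most k steps."""
    frontier, seen = {pos}, {pos}
    for _ in range(k):
        step = set()
        for (x, y) in frontier:
            for dx, dy in MOVES:
                q = (x + dx, y + dy)
                if 0 <= q[0] < size and 0 <= q[1] < size and q not in walls:
                    step.add(q)
        frontier = step - seen
        seen |= step
    return seen

def option_reward(action, human, walls, size=5, k=3):
    """Score a machine action by the human's resulting option count.
    The machine's only actions here: remove one wall, or do nothing."""
    new_walls = walls - {action} if action else walls
    return len(reachable(human, new_walls, size, k))

walls = {(0, 1), (1, 0), (1, 1)}   # walls boxing the human into a corner
human = (0, 0)

# The machine prefers whichever action leaves the human the most options.
best = max([None, *walls], key=lambda a: option_reward(a, human, walls))
print("preferred action:", "do nothing" if best is None else f"remove wall {best}")
[/code]

The point of the toy is only the shape of the objective: the machine is scored not on its own goals but on the size of the human's remaining choice set.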

As to what I mean by “human betterment,” see this book, which describes a new ethical theory now being developed, a theory capable of generating a Science of Ethics. The evidence is there. The theory is powerful. The results give us a better world:
https://www.amazon.com/LIVING-SUCCESSFU ... B01NBKS42C

Your reviews?
Comments? Questions?
TimTimothy
Posts: 25
Joined: Wed Nov 08, 2017 8:30 pm

Re: How to minimize the danger of AGI (artificial general intelligence)

Post by TimTimothy »

I like the direction of your thoughts.

I've been reading a lot about how rooted "intelligence" is in our biological substrate. For a long time, I thought that "I" was substrate-independent, that you could upload my being into a silicon computer and I'd still be me. I no longer believe that.

The book Behave by Robert Sapolsky has had a profound effect on my thinking (along with several others).

I've had some very up-close-and-personal dealings with a sociopath. I've come to view sociopaths as almost a different species from me. The manner in which this person's mind operated was wholly different from my own. And there's strong evidence that sociopaths in society are a real danger that we've yet to really acknowledge. We are on the verge of creating super-intelligent machines that are functionally sociopathic. Intelligence absent values is a dangerous thing.

So, I like your idea of encoding the safeguarding of human values as a primary requirement. The trick, though, is how to do that in a silicon-based machine.
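To show what I mean, here's a minimal sketch in Python, with every action, score, and predicate invented for illustration. An unconstrained utility maximizer (the "sociopathic" case) happily picks a deceptive action; the same maximizer behind hard value constraints does not. The hard part both our posts point at, namely what those predicates should actually be, is exactly what this sketch skips.

[code]
# Minimal sketch -- the actions, utilities, and predicates below are
# all stubs invented for illustration, not a real value specification.

def deceives(action):       # assumed predicate; the hard open problem
    return action.get("deceptive", False)

def harms(action):          # assumed predicate; likewise a stub
    return action.get("harm", 0) > 0

CONSTRAINTS = [deceives, harms]

def constrained_choice(actions, utility):
    """Maximize utility only over actions that violate no constraint."""
    permitted = [a for a in actions if not any(c(a) for c in CONSTRAINTS)]
    return max(permitted, key=utility, default=None)

actions = [
    {"name": "lie to the user",     "utility": 9, "deceptive": True},
    {"name": "seize the resources", "utility": 8, "harm": 1},
    {"name": "answer honestly",     "utility": 5},
]

u = lambda a: a["utility"]
print(max(actions, key=u)["name"])             # unconstrained: "lie to the user"
print(constrained_choice(actions, u)["name"])  # constrained: "answer honestly"
[/code]

Note the design choice: values enter as constraints that filter actions before any utility comparison, rather than as one more term traded off inside the utility.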