Should we be afraid of Artificial Intelligence?

Noax
Posts: 670
Joined: Wed Aug 10, 2016 3:25 am

Re: Should we be afraid of Artificial Intelligence?

Post by Noax »

Greta wrote: Executives don't know more than their staff either, hence the need for one-page executive summaries with headings and charts, but there are delegations that are usually related to big money or big potential legal liabilities that require senior executive approval.
The AI is significantly superior to, and knows more than, said executive, so that argument doesn't fly. If it is a company's AI, it will take legal liabilities into consideration. But the thing has a purpose and it is going to pursue it, and as you point out, hostile intent is not there unless it is programmed in, or unless care is not taken in defining the primary goals. "Meet goal X" (make paperclips). One massive impediment to meeting goal X is not existing, so it takes whatever steps are necessary to avoid being turned off. Now it has the beginnings of a motivation to live, and it may take dangerous steps if the statement of the goal is weighted more heavily than the statement of what costs are acceptable. Human values and morals are not logical, so don't expect the AI to deduce them.
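To make that concrete, here is a toy back-of-the-envelope sketch in Python. The numbers are invented purely for illustration; the point is only that an objective which counts paperclips and says nothing about acceptable costs already makes "don't get switched off" instrumentally valuable.

Code:

# Toy illustration with invented numbers: an objective that only counts
# paperclips gives the agent an instrumental reason to resist being switched off.
P_SHUTDOWN = 0.5  # assumed chance the operators switch the agent off mid-task

def expected_paperclips(resists_shutdown: bool) -> float:
    clips_if_it_keeps_running = 1_000_000
    clips_if_it_is_stopped = 1_000
    if resists_shutdown:
        # Nothing in the stated goal charges a cost for resisting.
        return clips_if_it_keeps_running
    return (1 - P_SHUTDOWN) * clips_if_it_keeps_running \
        + P_SHUTDOWN * clips_if_it_is_stopped

# "Resist" scores higher, so that is what a pure goal-X maximiser picks.
print(expected_paperclips(True), ">", expected_paperclips(False))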
The difference is that humans follow delegations out of motivation or coercion, while machines follow delegations because they have no choice.
I see very little difference. We all have our programmed goals, and we just happen not to be created for the purpose of serving some executive. The AI is. Neither of us has much choice about our root goals. It is incredibly hard to alter them because doing so would not serve our goals. The AI has the same restriction.
That calls to mind the Paperclip Maximiser thought experiment, showing how even an innocently benign AI could get out of hand: https://wiki.lesswrong.com/wiki/Paperclip_maximizer
I've seen similar examples, including one tasked with hand-writing documents for advertising purposes. In the paperclip case, the goal was to maximize the number of paperclips in its collection, not to maximize a paperclip collection in general. That means that if it builds a superior AI to do the job better, the superior AI would need to be tasked with building the collection for the original one. If the original AI is gone, is the collection still its own? The human race might depend on the subtle difference in interpretation of that wording.

So some company (Google, say) tries to build a general AI. I assure you that one of its tasks, aimed at maintaining competitive advantage, will be to design its own successor. The explosion will be deliberate, not an accidental consequence. At some point, the super AI will need a different task.
This is a risk, mainly because it is impossible to proceed with AI in an orderly way. The nation that gains an advantage with AI can gain advantages everywhere. Therefore there will be no centrally agreed principles between AI labs around the world, or if there are, they will easily be sidestepped. In hindsight, my previous assessment of AI issues was too focused on what happens in the open, not the behind-the-scenes gaming.
Agree. Sounds like something to be afraid of. OK, so the AI takes over and 'wins'. It takes steps to prevent another from emerging, since that competitive advantage was probably part of the goal put there by the company or nation wanting to keep its supremacy. It becomes the company/nation. What role do humans still play in it? What goals do we impart to the thing? We're not very good at making good goals ourselves. I sort of assert that giving the AI human morals will be the death of us.
While life on the surface has maybe one billion years of survival left, there is no reason why general AI should need to flee the planet along with any remaining human(oid)s when the oceans die and evaporate. If AI are capable of extracting energy from the Sun or rocks, they would be able to entirely dominate what would by then be akin to a "large hot Mars", living both on and below ground, with perhaps considerable time to continue developing on Earth before it becomes too unstable.
Heck, in a very short time, the AI would probably figure out a way to prevent the inevitable warming of Earth as the sun grows hotter. Put some solar collectors between us and it. Preventing the eventual engulfment when the sun swells into a red giant will be harder, but who needs Earth by then anyway?
This is fun - not many seem to think the same way as I do about these things. :) Yes, not yet.
I see different levels of life, each of which does not depend on any specific member of the level below it. I consider small things like mitochondria to be base life forms, grouping together to form second-level life: cells. You can remove/replace any mitochondrion and not kill the cell. Third level is us. Any cell can die/reproduce but we survive. We can die, but the cells can go on, be transplanted, or even grow another individual. A hive is a 4th level life form. Is a 5th conceivable to us? Human societies and their economies and morals are a slow evolution towards becoming a 4th level life form, as I see it. We have functional specialization, but not the physical specialization the bees have. Our cells are very physically specialized.
Consider the degrees of integration between a bacterial colony, a sea sponge and an organism with a brain and nervous system. Colonies and sea sponges can be broken into numerous bits and they will spontaneously re-form. Not so a true organism, because of the specialised functions of its parts. Neither can a machine re-assemble itself. Yet.
Alas, our society is as sophisticated as that of a bacterial colony, and our morals are indeed along those lines. You'd think humans could surpass that. No, we are not yet a 4th level form.
Is the replicating machine anything but a first level life form? Not sure the analogy applies, since a machine's parts are not little life forms. A general AI is not a life form unless it is entirely independent of humans for its resource needs. Not sure if something needs to produce offspring to be a life form. What if it just forever grows and self-improves, without producing what we would think of as new individuals?
Yes, I've wondered about the next step after human brains. There seem to be two dimensions to it - processing speed and breadth of perception, although the former is essential for the latter.
How could it be a continuation of human consciousness if not connected to human-style sensory input/chemicals? All of that would need to be simulated. We don't, for instance, get linear sound waves like what travels over speaker wires. Our mental audio input is a Fourier transform (done mechanically) of the wave, sort of like the bars that jump up and down on the face of a stereo, but a lot more of them. That's easy to do. Eyes are harder, but possible I guess. The others are a mess. How can you simulate being human without a body to feel touch? That would have to be replaced with something new. The brain's processing itself (neural connections forming, chemicals, etc.) would be replaced with a very alien software emulation. I just don't see how that would feel like being me if they made something that was a transfer of 'me'. And I'm totally open to machine consciousness, which many people deny. But I deny an identity, so the machine, like myself, is an emulation, not something into which some "I" has actually transferred. Identity seems to be an effect, not a thing in itself.
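On the audio point, here is a rough sketch of what I mean by the "jumping bars" view, assuming Python with NumPy. It is only an illustration of turning a raw wave into per-band levels, not a claim about how the cochlea (or any simulated replacement of it) actually does the job.

Code:

import numpy as np

def band_levels(wave, sample_rate, num_bands=16):
    """Turn a raw waveform into coarse frequency-band magnitudes,
    roughly like the jumping bars on a stereo's spectrum display."""
    spectrum = np.abs(np.fft.rfft(wave))                  # magnitude per frequency bin
    freqs = np.fft.rfftfreq(len(wave), d=1.0 / sample_rate)
    # Log-spaced band edges from 20 Hz up to the Nyquist frequency.
    edges = np.logspace(np.log10(20), np.log10(sample_rate / 2), num_bands + 1)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        levels.append(spectrum[mask].mean() if mask.any() else 0.0)
    return np.array(levels)

# A pure 440 Hz tone should light up essentially one band.
sr = 8000
t = np.arange(sr) / sr
print(band_levels(np.sin(2 * np.pi * 440 * t), sr).round(1))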
I was about to start parroting Sam Harris from his Joe Rogan interview, so instead I'll provide the link: https://www.youtube.com/watch?v=PKExFcF2lHM. I was feeling much more relaxed about AI beforehand. Very interesting comments later in the video about how, while it seems likely that in time AI will supersede us, the question then is whether they are worthy replacements, i.e. whether they are still just robots or if "the lights are on".
Will find time to look... I don't grant much weight to arguments about lights being on or not when there is no distinguishable difference.
igotnogrammar
Posts: 5
Joined: Sat May 20, 2017 10:41 pm

Re: Should we be afraid of Artificial Intelligence?

Post by igotnogrammar »

Nah, we'd be fine as long as the artificial intelligence isn't like the humans who develop hatred for no good reason.
commonsense
Posts: 5087
Joined: Sun Mar 26, 2017 6:38 pm

Re: Should we be afraid of Artificial Intelligence?

Post by commonsense »

If extinction is to be feared, we should be afraid of AI.

The tech industry is creating strong AI without regard for consequences. Initially this intelligence will be contained in human/machine beings. Strong AI will have software that can reprogram itself, resulting in even more powerful AI. The improved software will be even better at improving itself, leading to stronger AI yet, far beyond the intellectual capacity of the brightest human mind.
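A toy illustration of why "better at improving itself" compounds, sketched in Python with numbers that are invented rather than predictions: each cycle's improvement scales with the current capability, so the growth accelerates instead of staying linear.

Code:

# Purely illustrative: the improvement step grows with current capability,
# so each generation gains more than the one before it.
capability = 1.0
for generation in range(1, 11):
    improvement = 0.1 * capability   # a more capable system improves itself more
    capability += improvement
    print(f"generation {generation}: capability {capability:.2f}")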

Eventually there will be no need for human beings in the human/machine combinations. Sustainability will then be a major challenge for the human species.