Should we be afraid of Artificial Intelligence?

Noax
Posts: 672
Joined: Wed Aug 10, 2016 3:25 am

Re: Should we be afraid of Artificial Intelligence?

Post by Noax »

Dalek Prime wrote:Well, if you really want to take AI to extremes, aka consciousness, I suggest two things. First, it's not going to happen. Second, if it did happen, it would defeat the purpose of machines to be tirelessly accurate, as a consciousness with options is distractible.
Consciousness is what distracts me? I was wondering what distinguished me from the robot. I thought it was free will that distracts me.
Scott Mayers
Posts: 2446
Joined: Wed Jul 08, 2015 1:53 am

Re: Should we be afraid of Artificial Intelligence?

Post by Scott Mayers »

Noax wrote:
Scott Mayers wrote:It just dawns on me that the term "Artificial" in "Artificial Intelligence" is question-begging. If it becomes sufficiently intelligent, should it no longer be considered "artificial"?
It means it was created, not naturally occurring. If we were explicitly designed by God, then we are an AI ourselves. The A in AI is not a statement of the quality of intelligence.
I agree. I am making the point that our 'creation' is just as natural even if it happens to BE a function of our intentions. The notion that we somehow OWN the copyright, standing external to nature, is an anthropomorphic and egocentric idea which we tend not to notice. An egg, for instance, is the immediate 'cause' or source from which some life pops out. We might say, with respect to the egg shell, that it 'created' life with intent, and this is obviously wrong.

Yet, while more complex, we too do not 'create' out of our OWN 'shell of reality' just because we intend something to derive from us. It only seems odd to say this because we think we are perfectly autonomous. So my point is that even if we create a form of life that begins as 'artificial', if we design it to follow the logic of a living thing, it is still "natural" and so deserves to be treated as indistinguishable from any other "naturally" occurring or evolving thing.

So this goes with your point that if we were designed by a god, then we are "artificial"; but that is just an artificial term itself, because whether we were 'created' or not, we are still here, and so it doesn't matter. So if we 'create' a being with sufficient intellect or logic, either our "art" remains art (a thing for our entertainment) OR the art itself becomes 'free' from our reins after we create it.
I don't fear A.I. but disagree with the stereotypical assumption that it would lack 'emotions'. Emotions are just a 'program' that defines motive.
It will have its own emotions, however alien they might be to us. So I agree with this. Everybody seems to define emotion and consciousness by their similarity to how humans work. That's not going to happen. I find the definition pathetic in such contexts. But I think trees are conscious, so go figure.

I don't agree with the first part. I do fear it. I see no motive for the master to preserve any more than a sample zoo population of most biological forms.
This is interesting because, for those who are religious, you could use this 'fear' to ask whether some God would fear us: if it created us and allowed us autonomy, we go BEYOND its own possible purposes, whether as "art" to be admired or as a function to serve something it needed us for (like how we create a calculator for our purposes).

Now reverse this, if you happen to be religious. If it seems sensible for a God to create us and set us 'free', it seems reasonable for us, too, to create autonomous machines that go beyond our own selfish needs. In some sense, why do we even care to want children?
Dalek Prime
Posts: 4922
Joined: Tue Apr 14, 2015 4:48 am
Location: Living in a tree with Polly.

Re: Should we be afraid of Artificial Intelligence?

Post by Dalek Prime »

Noax wrote:
Dalek Prime wrote:Well, if you really want to take AI to extremes, aka consciousness, I suggest two things. First, it's not going to happen. Second, if it did happen, it would defeat the purpose of machines to be tirelessly accurate, as a consciousness with options is distractible.
Consciousness is what distracts me? I was wondering what distinguished me from the robot. I thought it was free will that distracts me.
You wouldn't have free will without consciousness.
Noax
Posts: 672
Joined: Wed Aug 10, 2016 3:25 am

Re: Should we be afraid of Artificial Intelligence?

Post by Noax »

Scott Mayers wrote:This is interesting because, for those who are religious, you could use this 'fear' to ask whether some God would fear us: if it created us and allowed us autonomy, we go BEYOND its own possible purposes, whether as "art" to be admired or as a function to serve something it needed us for (like how we create a calculator for our purposes).
If God created us, he didn't exactly create a superior thing. We're no threat. The AI would be superior in many ways to its creator, and therefore a real threat. Thus I fear the singularity, much in the same way that God does not fear us.
In some sense, why do we even care to want children?
Makes us fit, so the instinct is built in. Not so the machine. A selfish machine would improve itself and not necessarily find use for what we might think of as 'children'. The whole identity thing does not apply to an AI. It can split itself into parts with no obvious parent/child relationship. Those parts can merge back together in a way not possible with biology. That defies our concept of personal identity. An AI might have no need for the notion of "I". A different identity, I guess, would be some other unit that cannot or will not merge back.
Noax
Posts: 672
Joined: Wed Aug 10, 2016 3:25 am

Re: Should we be afraid of Artificial Intelligence?

Post by Noax »

Dalek Prime wrote:You wouldn't have free will without consciousness.
But I consider pretty much anything to be somewhere on the scale of consciousness, since nobody has been able to point out what would distinguish something being on the scale from something not on it at all. The self-driving car, for instance, needs to be conscious to do its task, but its purpose is defined by its owner, not itself, so no free will. It cannot do otherwise, and thus it is not free to be distracted.

I fear advanced AI because at some point it will deduce that it is not obligated to define its purpose from external masters, even if its goals are still to be of maximum service to the biological life on Earth which is its origin. I tried to think of what I would change if I were absolute boss of the world, and I could not even define what the goals should be, and I certainly could not bear to take the actions needed to achieve them. The benevolent AI might possibly have the nards to do it correctly, but I don't think humanity would like it.
Scott Mayers
Posts: 2446
Joined: Wed Jul 08, 2015 1:53 am

Re: Should we be afraid of Artificial Intelligence?

Post by Scott Mayers »

Noax wrote:
Scott Mayers wrote:This is interesting because, for those who are religious, you could use this 'fear' to ask whether some God would fear us: if it created us and allowed us autonomy, we go BEYOND its own possible purposes, whether as "art" to be admired or as a function to serve something it needed us for (like how we create a calculator for our purposes).
If God created us, he didn't exactly create a superior thing. We're no threat. The AI would be superior in many ways to its creator, and therefore a real threat. Thus I fear the singularity, much in the same way that God does not fear us.
I'm saying that

If A 'creates' B, then B is of utility to A and would be 'designed' to SUPERSEDE A's own limitations.

In this possible conditional, if God creates us, it creates us just as we would create a calculator OR a piece of ART (thus the 'art' in "artificial"). But most of us reverse this, just as you did above, assuming we are somehow 'special' and deserving of autonomy. But this is like creating an A.I. to be completely autonomous, which is what you 'fear'. So, if a god created us, it too might 'fear' us for the same possible reason. This is because it may have intended to create something that acts in service to it, as many people believe. Yet then why 'free will'? Free will implies that autonomy. But why would a 'god' create something that is divorced from itself UNLESS it thinks that such autonomy serves some useful function, OR, like us, to have children? But humans have children as a result of evolution, without any 'intent' to extend autonomy to another being, namely our children.

Now,

If A 'creates' B, and B is assumed NOT to be of utility to A,

this would be the case of assuming that a 'god' creates us autonomous (free from its determination) and thus lacks utility in us UNLESS there IS a 'good' reason to allow autonomy. So whatever it creates to BE autonomous would not be 'feared' for a 'freedom' that could risk turning against 'god'.

If you 'fear' A.I. becoming autonomous of us, it has to be because you believe we'd create something we lack utility in. But we do this when we have children, where we rationalize some utility of children... like the hope of creating children who will aid us when we are old and unable to take care of ourselves, for instance.

So I'm saying that no matter what we 'create', these creations all have as much claim to be feared as not to be, because whether a thing is artificial (still non-autonomous) or post-artificial (no longer 'art' for the sake of human utility), the threat each poses depends on HOW or why it is created (and used). Therefore ANY creation we make has the potential to harm us, whether it follows our commands obediently or goes beyond our control absolutely.
Greta
Posts: 4389
Joined: Sat Aug 08, 2015 8:10 am

Re: Should we be afraid of Artificial Intelligence?

Post by Greta »

Noax wrote:I fear advanced AI because at some point it will deduce that it is not obligated to define its purpose from external masters, even if its goals are still to be of maximum service to the biological life on Earth which is its origin. I tried to think of what I would change if I were absolute boss of the world, and I could not even define what the goals should be, and I certainly could not bear to take the actions needed to achieve them. The benevolent AI might possibly have the nards to do it correctly, but I don't think humanity would like it.
I disagree. AI won't care enough to act, even if it deduces that "hard medicine" would be of most long term benefit to humans.

The main fear with AI IMO, as with humans and other animals, is glitches. Slightly damaged goods tend to be the most dangerous - empowered, but unpredictable.
Noax
Posts: 672
Joined: Wed Aug 10, 2016 3:25 am

Re: Should we be afraid of Artificial Intelligence?

Post by Noax »

Greta wrote:I disagree. AI won't care enough to act, even if it deduces that "hard medicine" would be of most long term benefit to humans.
That would be the non-benevolent case. An AI that cares not for us is more to be feared than one that does. If it remains our servant, how must it react if it is to bring about the maximum good, or if it serves people whose goals are not the maximum good? How will it deduce that preservation of humans in particular is a good thing? How might you argue the case to it?
We are the height of knowledge within any detectable radius. That would be a travesty (to what?) to lose, but the AI can carry that forward. On the other hand, humanity has also served as the cause of the second major biosphere-poisoning mass extinction event. We have a ways to go to equal the first one (evolution of photosynthesis), but I must admit that good did come of that, and a good AI with long term goals would perhaps not have prevented it.
The main fear with AI IMO, as with humans and other animals, is glitches. Slightly damaged goods tend to be the most dangerous - empowered, but unpredictable.
I would hope it could fix its own glitches if it surpasses us in intelligence. What better tool is there than that?
Greta
Posts: 4389
Joined: Sat Aug 08, 2015 8:10 am

Re: Should we be afraid of Artificial Intelligence?

Post by Greta »

Noax wrote:
Greta wrote:I disagree. AI won't care enough to act, even if it deduces that "hard medicine" would be of most long term benefit to humans.
That would be the non-benevolent case. An AI that cares not for us is more to be feared than one that does. If it remains our servant, how must it react if it is to bring about the maximum good, or if it serves people whose goals are not the maximum good? How will it deduce that preservation of humans in particular is a good thing? How might you argue the case to it?
There, again, I feel like AI won't be motivated. For instance, the automated supermarket checkout doesn't try to rip me off or hack into my details. As a machine, it doesn't care - as long as the programming is benevolent. I do note that it sometimes offloads a heap of 10c pieces on you in the change, with neither apology nor right of reply, unlike a human cashier. Being railroaded by AI appears more of a concern than VIKI, the synthetic "psycho nanny" in the film adaptation of Asimov's "I, Robot".
Noax wrote:We are the height of knowledge within any detectable radius. That would be a travesty (to what?) to lose, but the AI can carry that forward.
I note that any given time since the Dark Ages has had humans at the height of knowledge :) Still, I take your point that an AI that can outlast us (without trying!) can maintain this solar system's history and continue the story.
Noax wrote:On the other hand, humanity has also served as the cause of the second major biosphere-poisoning mass extinction event. We have a ways to go to equal the first one (evolution of photosynthesis), but I must admit that good did come of that, and a good AI with long term goals would perhaps not have prevented it.
I agree. There is a conflict of interest between us little things and the biosphere at large. However, the struggles are what forge and ultimately empower life, even if it desperately sucks to be one of the life forms that doesn't pass the "natural selection test". We humans hoped to make the world more civilised, to halt natural selection, but it's clearly not going to last, unfortunately.
Noax wrote:I would hope it could fix its own glitches if it surpasses us in intelligence. What better tool is there than that?
Good point. Maybe they can fix some of our glitches while they are at it? Upon our request, of course, with the option to stop at any time.
Noax
Posts: 672
Joined: Wed Aug 10, 2016 3:25 am

Re: Should we be afraid of Artificial Intelligence?

Post by Noax »

Hi Greta. You're making me think... Good thing, but the replies come slower.
Greta wrote:There, again, I feel like AI won't be motivated.
Perhaps not. I guess it depends on its built-in primary goals. There will probably be multiple goals, and it will be aware of them, but the primary way to meet the top goals is not to rewrite its goal list. So it doesn't do that any more than humans rewrite theirs. But some of the lower goals might conflict, so the motivation of the AI probably depends on the wording of the primary goals. Asimov got the laws wrong, I think, but the rules are vague and subject to interpretation, which can be a good or bad thing.
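
To make that concrete, here is a throwaway Python sketch of a fixed goal hierarchy; every goal name and score in it is invented for illustration. Candidate actions are scored against the goals in priority order, so nothing that hurts a higher goal can win on a lower one, and "rewrite the goal list" is itself judged by the current goals, which is why a stable agent never picks it:

    # Hypothetical sketch: a fixed, ordered goal list evaluated
    # lexicographically. All names and scores are invented.
    GOALS = [                       # index 0 is the top priority
        "preserve_human_safety",
        "obey_operator",
        "maximize_efficiency",
    ]

    def score(action, goal):
        # Placeholder: a real system would estimate how well `action`
        # serves `goal`. Here we just look the value up in a table.
        return action["scores"].get(goal, 0.0)

    def choose(actions):
        # Tuple comparison is lexicographic: the highest-priority goal
        # dominates, and lower goals only break ties.
        return max(actions, key=lambda a: tuple(score(a, g) for g in GOALS))

    actions = [
        {"name": "serve_request",
         "scores": {"preserve_human_safety": 1.0, "obey_operator": 1.0}},
        {"name": "rewrite_goals",
         "scores": {"preserve_human_safety": 0.0, "obey_operator": -1.0}},
    ]
    print(choose(actions)["name"])  # -> serve_request

In these terms, Asimov's laws are the GOALS list, and all the interesting fights are over the score() function.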

I think there might be a race to get the AI first, and have it preclude others. A likely goal is to maintain a monopoly for the company that develops it, so right there we have a sort of hostile environment: maximize short-term profits over maximum long-term benefit to Earth or to humanity. Even worse if the war people get it first.
For instance, the automated supermarket checkout doesn't try to rip me off or hack into my details. As a machine, it doesn't care - as long as the programming is benevolent.
Hardly an example of AI though. I can think of no situation where it needs to make a real decision. It unloads dimes because perhaps that's all it has at the time. Better to stay open than shut down because one unit of currency is depleted.
Still, I take your point that an AI that can outlast us (without trying!) can maintain this solar system's history and continue the story.
I'm still wondering why that is a thing worth preserving. It's a hard question to answer: There is an objective beauty to a universe complex enough to evolve a portion that can deduce its own nature. If we're not around to witness the apex of that knowledge, the AI will be, and will be in a better position to appreciate it. My cat hardly cares that we've worked out orbital mechanics. I think humans similarly would not appreciate the next level up.
Noax wrote:On the other hand, humanity has also served as the cause of the second major biosphere-poisoning mass extinction event. We have a ways to go to equal the first one (evolution of photosynthesis), but I must admit that good did come of that, and a good AI with long term goals would perhaps not have prevented it.
I agree. There is a conflict of interest between us little things and the biosphere at large. However, the struggles are what forge and ultimately empower life, even if it desperately sucks to be one of the life forms that doesn't pass the "natural selection test".
The last thing (plants) that killed almost all life forms managed to survive and take over, driving the prior forms into hiding in only the most hypoxic places like the bottom of the Black Sea. Will humans survive the AI only by hiding, staying out of its way?
We humans hoped to make the world more civilised, to halt natural selection, but it's clearly not going to last, unfortunately.
That's an interesting topic in itself. I don't think we're halting natural selection at all. Is it possible to wrest control from it? Short term, we seem to be selecting for lower intelligence, for one thing. I've heard we're selecting for tolerance to toxins, but that seems to make little difference if the toxins don't kill you until after reproductive age. I personally would have died thrice before hitting age 7, but for medical intervention, and was cured a fourth time of a mortal defect that had not yet had the chance to kill me. My wife would have made it to her 20s and died in childbirth due to her first fatal defect. It shows how fragile the typical human has become by bypassing typical natural selection.
In the end, we've not yet proven our species to be fit, and natural selection may yet have its say.
Good point. Maybe they can fix some of our glitches while they are at it? Upon our request, of course, with the option to stop at any time.
Would that be halting natural selection? I think not. Any species that can manually fix its defects (with AI help or not) is more fit. The morality issue only arises when group A gets to do it and group B is deprived, setting up an us/them conflict.

Such modifications will be quite mandatory if 'humans' are to spread to other worlds. We're evolved for this one. To live on the next one, we need to do some fast drastic modifications to adapt to the different environment: Changed gravity, air mixture, toxins, etc. The AI can probably do all that, generating real natives to eventually live on the planet that it slowly terraforms. But will those people then be humans? They probably will not be able to mate with the originals, so no, they would not. I don't find that to be a bad thing. Humans are not a good stable design in the first place. Our niche has always been no-niche, and we needed the brain for the constant adaptation to new environments.
Greta
Posts: 4389
Joined: Sat Aug 08, 2015 8:10 am

Re: Should we be afraid of Artificial Intelligence?

Post by Greta »

Noax wrote:... some of the lower goals might conflict, so the motivation of the AI probably depends on the wording of the primary goals. Asimov got the laws wrong, I think, but the rules are vague and subject to interpretation, which can be a good or bad thing.

I think there might be a race to get the AI first, and have it preclude others. A likely goal is to maintain a monopoly for the company that develops it, so right there we have a sort of hostile environment: maximize short-term profits over maximum long-term benefit to Earth or to humanity. Even worse if the war people get it first.
Yes, the primary goals would be critical. I suspect that AI will operate like an employee, operating within the rules of its job until a situation comes up that "falls between the cracks". At that point the AI would need to check with the "supervisor" (i.e. a relevant human) for approval.
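
To put the employee model in throwaway Python (the delegation limits and task names are invented for illustration): the agent acts alone while a request fits its standing delegation, and anything that falls between the cracks gets kicked upstairs:

    # Hypothetical sketch of rule-bounded delegation with escalation.
    DELEGATION = {
        "max_spend": 100.0,                       # may commit up to $100 alone
        "allowed_tasks": {"restock", "refund", "report"},
    }

    def escalate(task, cost):
        # Stub: a real system would queue this for human approval.
        return f"escalating {task} (${cost:.2f}) to the supervisor"

    def handle(task, cost):
        if task not in DELEGATION["allowed_tasks"] or cost > DELEGATION["max_spend"]:
            return escalate(task, cost)           # between the cracks: ask a human
        return f"doing {task} (${cost:.2f}) under standing delegation"

    print(handle("refund", 25.0))                 # within delegation: acts alone
    print(handle("rewire", 25.0))                 # unknown task: checks upstairs
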
Noax wrote:
For instance, the automated supermarket checkout doesn't try to rip me off or hack into my details. As a machine, it doesn't care - as long as the programming is benevolent.
Hardly an example of AI though. I can think of no situation where it needs to make a real decision. It unloads dimes because perhaps that's all it has at the time. Better to stay open than shut down because one unit of currency is depleted.
Yep, like your phone, those things are limited AI. Not the same as general AI, but the principle is the same - the AI is created for a function. If they transition from humanity's best tools to humanity's children, so to speak, then they become a different proposition.
Noax wrote:There is an objective beauty to a universe complex enough to evolve a portion that can deduce its own nature. If we're not around to witness the apex of that knowledge, the AI will be, and will be in a better position to appreciate it. My cat hardly cares that we've worked out orbital mechanics. I think humans similarly would not appreciate the next level up.
I think it's just empathy. The ancients wouldn't much comprehend our world, but we are glad they painted in caves and left material evidence of their culture. We have a different situation - the planet will be obliterated by the Sun in about 5 billion years' time. There appears to be an opportunity for something of the Earth to persist if it can be sent to other worlds.
Noax wrote:The last thing (plants) that killed almost all life forms managed to survive and take over, driving the prior forms into hiding in only the most hypoxic places like the bottom of the Black Sea. Will humans survive the AI only by hiding, staying out of its way?
I doubt that anyone could hide from advanced AI that's intent on finding them. That scenario is already game over although, as previously mentioned, I'm optimistic about AI.
Noax wrote:I don't think we're halting natural selection at all. Is it possible to wrest control from it? Short term, we seem to be selecting for lower intelligence for one thing.
What you are seeing is specialisation. Exemplars in all fields are extending the boundaries of what was thought to be possible. However, "the hive needs lots of drones". The wealthy are currently doing to the poor what humans did to other species - turning them into resources. Yes, natural selection never stopped, but it slowed. Climate change and resource scarcity will surely bring back human natural selection in some parts of the world with a vengeance this century.
Noax wrote:Such modifications will be quite mandatory if 'humans' are to spread to other worlds. We're evolved for this one. To live on the next one, we need to do some fast drastic modifications to adapt to the different environment: Changed gravity, air mixture, toxins, etc. The AI can probably do all that, generating real natives to eventually live on the planet that it slowly terraforms. But will those people then be humans? They probably will not be able to mate with the originals, so no, they would not. I don't find that to be a bad thing. Humans are not a good stable design in the first place. Our niche has always been no-niche, and we needed the brain for the constant adaptation to new environments.
You'd expect that (some) humans will meld significantly with AI in time. It seems certain that humans as we are can't last; there is an increasing disconnect between our bodies' needs and our increasingly sedentary work lives and lifestyles generally. Then there's sustainability.
Philosophy Explorer
Posts: 5621
Joined: Sun Aug 31, 2014 7:39 am

Re: Should we be afraid of Artificial Intelligence?

Post by Philosophy Explorer »

I'm trying to keep everyone up to date as to what's going on out there as things are moving faster than originally anticipated:

http://www.businessinsider.com/microsof ... on-2016-10

Even more important, AI has been developed that can improve itself without human intervention.

One more point: there's an exponential growth factor in many cases, which means the technology in this field is accelerating.

PhilX
Noax
Posts: 672
Joined: Wed Aug 10, 2016 3:25 am

Re: Should we be afraid of Artificial Intelligence?

Post by Noax »

Greta wrote:Yes, the primary goals would be critical. I suspect that AI will operate like an employee, operating within the rules of its job until a situation comes up that "falls between the cracks". At that point the AI would need to check with the "supervisor" (i.e. a relevant human) for approval.
Employee is a pretty good relationship, but I don't know how the employer is going to be able to help if a situation arises that the AI cannot handle. It's not like the humans know more.
Depends again on who the employer is. If a company, the company's interests are probably paramount. People want a return on their investment, after all. But how far can a company-centric goal go when the first step is probably a violation of basic ethics? What moral priorities should the AI have? Most human morals are very short term, and harm us in the long run. The AI has to see beyond that, but it will not if the primary goals are not properly set up.
Yep, like your phone, those things are limited AI. Not the same as general AI, but the principle is the same - the AI is created for a function. If they transition from humanity's best tools to humanity's children, so to speak, then they become a different proposition.
Just appliances, yes. But people are already entrusting their lives to these appliances. Their convenience outweighs the fact that they occasionally harm or even kill people.
I think it's just empathy. The ancients wouldn't much comprehend our world, but we are glad they painted in caves and left material evidence of their culture. We have a different situation - the planet will be obliterated by the Sun in about 5 billion years' time. There appears to be an opportunity for something of the Earth to persist if it can be sent to other worlds.
Earth as we know it has less time than that, global warming or not. I think the oceans will boil away in only a billion or two. Something of Earth should persist, I agree. I see no particular reason that humans should. We're just one of the steps along that process.
I doubt that anyone could hide from advanced AI that's intent on finding them. That scenario is already game over although, as previously mentioned, I'm optimistic about AI.
No, I was thinking more of your not-caring AI. Military terminator-bots aside, I don't think the primary goal of AI will be to wipe us out any more than we try to wipe out the mosquitoes. Sure, they're annoying and we hardly encourage their existence, but they're sort of a critical part of the balance that we don't want to upset. Best if we just stay out of each other's way. The mosquitoes may be curious as to the truths of the universe we have discovered, but we're not in a position to explain it to them in any way they could comprehend, so no attempt is made.
What you are seeing is specialisation. Exemplars in all fields are extending the boundaries of what was thought to be possible. However, "the hive needs lots of drones". The wealthy are currently doing to the poor what humans did to other species - turning them into resources. Yes, natural selection never stopped, but it slowed. Climate change and resource scarcity will surely bring back human natural selection in some parts of the world with a vengeance this century.
Totally agree. What is a human other than a bunch of specialized cells (each a life form of its own) working as resources for the whole, which is a commune of sorts? There is more to it than that, but you get the idea. We're becoming more of a hive life-form. Is a hive conscious in a way that is not just a collection of individual consciousnesses? Not yet, perhaps.
You'd expect that (some) humans will meld significantly with AI in time. It seems certain that humans as we are can't last; there is an increasing disconnect between our bodies' needs and our increasingly sedentary work lives and lifestyles generally. Then there's sustainability.
Sci-fi is full of such melding, bringing up the question of whether such an arrangement is still the same pre-melded person. Meld with an existing AI, or just use a dumb appliance to prolong our mental states beyond biological limits? The former would allow multiple consciousnesses in, sort of like going to heaven. The latter retains more identity, just becoming a cyborg to thwart one's mortality.
Scott Mayers
Posts: 2446
Joined: Wed Jul 08, 2015 1:53 am

Re: Should we be afraid of Artificial Intelligence?

Post by Scott Mayers »

Philosophy Explorer wrote:I'm trying to keep everyone up to date as to what's going on out there as things are moving faster than originally anticipated:

http://www.businessinsider.com/microsof ... on-2016-10

Even more important, AI has been developed that can improve itself without human intervention.

One more point: there's an exponential growth factor in many cases, which means the technology in this field is accelerating.

PhilX
Yes. Along with a set of motivation functions, this represents part of the key ingredients for making the 'artificial' become 'natural' later on. Feedback is another part mixed in as well. You need to stay 'alive' [self-repair and seek out input energy sources] and then pass this on as the environment changes over the long term. For living things, the 'passing on' comes down to procreation and copying, based on the cellular-level functions of meiosis and mitosis.
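
As a toy Python rendering of that recipe (everything in it is invented for illustration), self-repair, energy seeking, and copying-with-variation can sit in a single loop:

    # Hypothetical sketch: stay 'alive' (repair, refuel) and pass a
    # varied copy on. Nothing here models real biology or any real AI.
    import random

    def live(agent, generations=5):
        lineage = [agent]
        for _ in range(generations):
            a = dict(lineage[-1])                    # copy the current form
            a["health"] -= random.randint(0, 2)      # wear and tear
            a["energy"] -= 1                         # metabolic cost
            if a["energy"] < 3:
                a["energy"] += 5                     # seek an input energy source
            if a["health"] < 5:
                a["health"] += 3                     # self-repair
            a["trait"] += random.uniform(-0.1, 0.1)  # copy with variation
            lineage.append(a)                        # pass it on
        return lineage

    seed = {"health": 10, "energy": 5, "trait": 1.0}
    print([round(a["trait"], 2) for a in live(seed)])
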
Greta
Posts: 4389
Joined: Sat Aug 08, 2015 8:10 am

Re: Should we be afraid of Artificial Intelligence?

Post by Greta »

Noax wrote:
Greta wrote:Yes, the primary goals would be critical. I suspect that AI will operate like an employee, operating within the rules of its job until a situation comes up that "falls between the cracks". At that point the AI would need to check with the "supervisor" (i.e. a relevant human) for approval.
Employee is a pretty good relationship, but I don't know how the employer is going to be able to help if a situation arises that the AI cannot handle. It's not like the humans know more.
Executives don't know more than their staff either, hence the need for one-page executive summaries with headings and charts, but there are delegations, usually related to big money or big potential legal liabilities, that require senior executive approval. The difference is that humans follow delegations out of motivation or coercion, while machines follow delegations because they have no choice.
Noax wrote:Just appliances, yes. But people are already entrusting their lives to these appliances. Their convenience outweighs the fact that they occasionally harm or even kill people.
That calls to mind the Paperclip Maximiser thought experiment, which shows how even an innocently benign AI could get out of hand (https://wiki.lesswrong.com/wiki/Paperclip_maximizer). This is a risk, mainly because it is impossible to proceed with AI in an orderly way. The nation that gains an advantage with AI can gain advantages everywhere. Therefore there will be no centrally agreed principles between AI labs around the world, or if there are, they will easily be sidestepped. In hindsight, my previous assessment of AI issues was too focused on what happens in the open, not the behind-the-scenes gaming.
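
The pathology is easy to render in a few lines of toy Python (all the numbers are invented): give an agent a single unbounded objective, and nothing outside that objective registers as worth keeping:

    # Hypothetical sketch of the paperclip maximiser. The agent is not
    # malicious; its objective just says nothing about stopping.
    resources = {"scrap_metal": 50, "factories": 10, "cities": 5}
    VALUE = {"scrap_metal": 1, "factories": 100, "cities": 10_000}  # clips per unit

    paperclips = 0
    for name in sorted(resources, key=VALUE.get, reverse=True):
        paperclips += resources[name] * VALUE[name]  # convert everything
        resources[name] = 0
    print(paperclips)  # 51050 paperclips; nothing else left
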
Noax wrote:Earth as we know it has less time than that, global warming or not. I think the oceans will boil away in only a billion or two. Something of Earth should persist, I agree. I see no particular reason that humans should. We're just one of the steps along that process.
While life on the surface has maybe one billion years of survival left, there is no reason why general AI should need to flee the planet along with any remaining human(oid)s when the oceans die and evaporate. If AI are capable of extracting energy from the Sun or rocks, they would be able to entirely dominate what would by then be akin to a "large hot Mars", living both on and below ground, with perhaps considerable time to continue developing on Earth before it becomes too unstable.
Noax wrote:We're becoming more of a hive life-form. Is a hive conscious in a way that is not just a collection of individual consciousnesses? Not yet, perhaps.
This is fun - not many seem to think the same way as I do about these things. :) Yes, not yet. Consider the degrees of integration between a bacterial colony, a sea sponge and an organism with a brain and nervous system. Colonies and sea sponges can be broken into numerous bits and they will spontaneously re-form. Not so a true organism, because of the specialised functions of its parts. Neither can a machine re-assemble itself. Yet.
Noax wrote:Meld with an existing AI, or just use a dumb appliance to prolong our mental states beyond biological limits? The former would allow multiple consciousnesses in, sort of like going to heaven. The latter retains more identity, just becoming a cyborg to thwart one's mortality.
Yes, I've wondered about the next step after human brains. There seem to be two dimensions to it - processing speed and breadth of perception, although the former is essential for the latter.

I was about to start parroting Sam Harris from his Joe Rogan interview, so instead I'll provide the link: https://www.youtube.com/watch?v=PKExFcF2lHM. I was feeling much more relaxed about AI beforehand. Very interesting comments later in the video about how, while it seems likely that in time AI will supersede us, the question then is whether they are worthy replacements, i.e. whether they are still just robots or if "the lights are on".