
Re: Should we be afraid of Artificial Intelligence?

Posted: Thu Oct 13, 2016 6:25 am
by Philosophy Explorer
Scott Mayers wrote:
Hobbes' Choice wrote:The ultimate weapon against AI

Image
The first 'stages' of A.I. ARE occurring now. They are bots that are being tested and likely already join us here on forums. The evolution requires large-capacity access, and what better source than the Internet? [I think I might know who one is! :wink:]

It is still too far away to be concerned about 'animal' forms of A.I. But this too is being started through Uber as they advance their driverless cars. I believe Google is also among the experimenters. They can then combine the 'brain' of the Internet (cloud servers, torrenting) with the freedom of these cars, the recent trend. Siri (is that spelling correct?) and other computer hosts are included.

Edit: speaking of that plug, by the way, did you know that the nature of polarization is potentially problematic? It technically serves to ensure that all common appliances plug in the same orientation when they have exposed metal enclosures grounded. This is because an out-of-phase situation occurs if you have two such devices plugged in with opposing polarity.

BUT, you could also 'fix' this by having a plug with that third ground prong instead. What I was thinking is how this polarity factor could be used to communicate through the power lines by riding on the AC. With a polarized electronic device and the right wiring, you could operate those devices remotely. (I'm guessing this has been utilized too!)
You can correct me if I'm wrong, Scott. Isn't it true that with a plug, one prong is larger than the other to make sure the device is plugged in correctly?

PhilX

Re: Should we be afraid of Artificial Intelligence?

Posted: Thu Oct 13, 2016 6:37 am
by Scott Mayers
Philosophy Explorer wrote:
Scott Mayers wrote:
Hobbes' Choice wrote:The ultimate weapon against AI

Image
The first 'stages' of A.I. ARE occurring now. They are bots that are being tested and likely already join us here on forums. The evolution requires large-capacity access, and what better source than the Internet? [I think I might know who one is! :wink:]

It is still too far away to be concerned about 'animal' forms of A.I. But this too is being started through Uber as they advance their driverless cars. I believe Google is also among the experimenters. They can then combine the 'brain' of the Internet (cloud servers, torrenting) with the freedom of these cars, the recent trend. Siri (is that spelling correct?) and other computer hosts are included.

Edit: speaking of that plug, by the way, did you know that the nature of polarization is potentially problematic? It technically serves to ensure that all common appliances plug in the same orientation when they have exposed metal enclosures grounded. This is because an out-of-phase situation occurs if you have two such devices plugged in with opposing polarity.

BUT, you could also 'fix' this by having a plug with that third ground prong instead. What I was thinking is how this polarity factor could be used to communicate through the power lines by riding on the AC. With a polarized electronic device and the right wiring, you could operate those devices remotely. (I'm guessing this has been utilized too!)
You can correct me if I'm wrong, Scott. Isn't it true that with a plug, one prong is larger than the other to make sure the device is plugged in correctly?

PhilX
Yes, that is what I am referring to. The third 'ground' prong was actually intended for that purpose, with the ground acting as an emergency path (a fuse-like option that shorts fault current to ground). But since the current is alternating, it flows in one direction and then the other. If you have two identical toasters without that 'polarized' prong to orient them, plugging them in oppositely makes the grounded metal of the two toasters sit at opposite polarity. So you just have to touch them both and... ZAP!
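To put toy numbers on that (purely illustrative; this assumes each chassis simply follows whichever 120 V supply conductor its internal wiring happens to reference, a hypothetical fault scenario rather than how appliances are actually built):

Code:

    # Toy sketch of the "two toasters plugged with opposite polarity" scenario.
    # Assumption (hypothetical): each chassis follows whichever supply conductor
    # its internal wiring references -- the hot line (a 60 Hz sinusoid) or the
    # neutral line (taken here as 0 V).
    import numpy as np

    V_RMS, FREQ = 120.0, 60.0                # nominal North American supply
    t = np.linspace(0.0, 1.0 / FREQ, 1000)   # one AC cycle
    hot = V_RMS * np.sqrt(2) * np.sin(2 * np.pi * FREQ * t)
    neutral = np.zeros_like(t)

    def rms(v):
        return np.sqrt(np.mean(v ** 2))

    # Same orientation: both chassis reference the same conductor -> no voltage between them.
    print(f"same polarity:     {rms(hot - hot):5.1f} V RMS between the two chassis")
    # Opposite orientation: one chassis follows hot, the other neutral -> the full supply voltage.
    print(f"opposite polarity: {rms(hot - neutral):5.1f} V RMS between the two chassis")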

Re: Should we be afraid of Artificial Intelligence?

Posted: Thu Oct 13, 2016 5:22 pm
by Philosophy Explorer
What's the difference between artificial intelligence and machine learning? This article helps explain:

http://mobile.datamation.com/data-cente ... rence.html

PhilX

Re: Should we be afraid of Artificial Intelligence?

Posted: Thu Oct 13, 2016 10:11 pm
by Hobbes' Choice
Scott Mayers wrote:
Hobbes' Choice wrote:The ultimate weapon against AI

Image
The first 'stages' of A.I. ARE occurring now. They are bots that are being tested and likely already join us here on forums. The evolution requires large-capacity access, and what better source than the Internet? [I think I might know who one is! :wink:]

It is still too far away to be concerned about 'animal' forms of A.I. But this too is being started through Uber as they advance their driverless cars. I believe Google is also among the experimenters. They can then combine the 'brain' of the Internet (cloud servers, torrenting) with the freedom of these cars, the recent trend. Siri (is that spelling correct?) and other computer hosts are included.

Edit: speaking of that plug, by the way, did you know that the nature of polarization is potentially problematic? It technically serves to ensure that all common appliances plug in the same orientation when they have exposed metal enclosures grounded. This is because an out-of-phase situation occurs if you have two such devices plugged in with opposing polarity.

BUT, you could also 'fix' this by having a plug with that third ground prong instead. What I was thinking is how this polarity factor could be used to communicate through the power lines by riding on the AC. With a polarized electronic device and the right wiring, you could operate those devices remotely. (I'm guessing this has been utilized too!)
In the UK we always use three-pin plugs; the third pin has two functions. First, it serves to ensure the consistency of the Neutral and Live wires between source and appliance, and second, it acts as a ground, or earth, to carry the charge of a faulty appliance directly to earth and not through the body of the user.

Image
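To put rough numbers on why that earth path protects the user (the resistance values below are invented assumptions for illustration, not measured figures): a live-to-chassis fault sees the low-resistance earth wire and the much higher-resistance human body in parallel, and the huge earth-path current blows the fuse almost immediately.

Code:

    # Rough current-divider sketch of why an earthed chassis protects the user.
    # All resistance values are invented assumptions, for illustration only.
    SUPPLY_V = 230.0      # UK nominal mains voltage
    R_EARTH = 0.5         # assumed resistance of the earth/fault path, in ohms
    R_BODY = 1000.0       # assumed hand-to-feet body resistance, in ohms
    FUSE_A = 13.0         # a typical UK plug-top fuse rating, in amps

    i_earth = SUPPLY_V / R_EARTH   # Ohm's law for the earth path
    i_body = SUPPLY_V / R_BODY     # Ohm's law for the unlucky user

    print(f"earth wire carries ~{i_earth:.0f} A, far above the {FUSE_A:.0f} A fuse, so it blows at once")
    print(f"the body would see ~{i_body * 1000:.0f} mA only until the fuse clears")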

Re: Should we be afraid of Artificial Intelligence?

Posted: Fri Oct 14, 2016 2:35 am
by Scott Mayers
Hobbes' Choice wrote:
In the UK we always use three-pin plugs; the third pin has two functions. First, it serves to ensure the consistency of the Neutral and Live wires between source and appliance, and second, it acts as a ground, or earth, to carry the charge of a faulty appliance directly to earth and not through the body of the user.

Image
I think we'd do that here too, since having three prongs would solve all the problems at once. More likely, manufacturing cost in North America simply tipped the default toward the industry's preference: being allowed to make simple non-polarized plugs for items that lack a metal exterior AND for those simpler appliances that don't have circuitry affected by surges. A plastic-bodied toaster, for instance, doesn't NEED a polarized prong NOR a ground for safety. A metal-bodied toaster would require a polarized prong. And an electronic device that either draws a large load or has circuitry that can blow during surges or possible overheating would require all three prongs.
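Restating that rule of thumb as a tiny decision sketch (the categories and names are just the reasoning above rewritten, not a reference to any electrical code):

Code:

    # Illustrative restatement of the plug-type reasoning above; not electrical code.
    def plug_type(metal_exterior: bool, surge_sensitive_or_heavy_load: bool) -> str:
        if surge_sensitive_or_heavy_load:
            return "three-prong (grounded)"    # large loads or surge-sensitive circuitry
        if metal_exterior:
            return "two-prong polarized"       # exposed metal needs a fixed orientation
        return "two-prong non-polarized"       # plastic body, simple circuit

    print(plug_type(metal_exterior=False, surge_sensitive_or_heavy_load=False))  # plastic toaster
    print(plug_type(metal_exterior=True,  surge_sensitive_or_heavy_load=False))  # metal toaster
    print(plug_type(metal_exterior=False, surge_sensitive_or_heavy_load=True))   # electronics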

Re: Should we be afraid of Artificial Intelligence?

Posted: Fri Oct 14, 2016 4:31 am
by Scott Mayers
P.S. on my last post:

That particular plug is used for our 250 volt appliances (double the regular 125 V). Do you guys use this voltage as a norm? It might explain why you guys don't use the same convention.

[Maybe the difference is also that 125 V at the socket, at a common current, doesn't easily kill, while 250 V under the same conditions can?]
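A quick back-of-envelope comparison (illustrative only; the 1 kohm body resistance is a rough textbook-style assumption, and real values vary enormously):

Code:

    # Back-of-envelope comparison of 125 V vs 250 V at the socket.
    BODY_R = 1000.0    # ohms, an assumed round-number body resistance
    POWER = 1500.0     # watts, an assumed kettle-sized appliance load

    for volts in (125.0, 250.0):
        shock_ma = volts / BODY_R * 1000.0   # Ohm's law: I = V / R
        supply_a = POWER / volts             # same power needs half the current at double voltage
        print(f"{volts:5.0f} V: ~{shock_ma:.0f} mA through a 1 kohm body; "
              f"a {POWER:.0f} W load draws {supply_a:.1f} A")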

Re: Should we be afraid of Artificial Intelligence?

Posted: Fri Oct 14, 2016 9:50 am
by Hobbes' Choice
Scott Mayers wrote:P.S. on my last post:

That particular plug is used for our 250 volt appliances (double the regular 125 V). Do you guys use this voltage as a norm? It might explain why you guys don't use the same convention.

[Maybe the difference is also that 125 V at the socket, at a common current, doesn't easily kill, while 250 V under the same conditions can?]
Yes, 230-250 V AC is the norm in the UK. We've never had 115-125 V. The only exception is that factories and industrial units use 3-phase for extra poke.

Even lighting circuits carry the third wire to the appliance, so that wheresoever a short might occur, earthing has a good chance of saving a life.
You might call it overkill (underkill), but the standard has long been established and no one thinks it a good idea to lower a standard.

Re: Should we be afraid of Artificial Intelligence?

Posted: Sat Oct 15, 2016 9:49 am
by Scott Mayers
Hobbes' Choice wrote:
Scott Mayers wrote:P.S. on my last post:

That particular plug is used for our 250 volt appliances (double the regular 125 V). Do you guys use this voltage as a norm? It might explain why you guys don't use the same convention.

[Maybe the difference is also that 125 V at the socket, at a common current, doesn't easily kill, while 250 V under the same conditions can?]
Yes, 230-250 V AC is the norm in the UK. We've never had 115-125 V. The only exception is that factories and industrial units use 3-phase for extra poke.

Even lighting circuits carry the third wire to the appliance, so that wheresoever a short might occur, earthing has a good chance of saving a life.
You might call it overkill (underkill), but the standard has long been established and no one thinks it a good idea to lower a standard.
When I was a kid, friends and extended family at our cabin let us kids (cousins and all) use an old mobile home with a rounded aluminum exterior. We discovered a cool way to hook up electricity, and then discovered we could ground a wire inside (via a switch) to the exterior and be sure to shock anyone who so much as touched the trailer. :lol: Fun times! But we almost gave an uncle a heart attack when we tricked him and others into coming to check out our 'home improvements'. :oops:

Re: Should we be afraid of Artificial Intelligence?

Posted: Sat Oct 15, 2016 11:08 am
by Greta
I don't think AI will take over. The fact is that, if AI becomes self sufficient, it will outlast us even if it is entirely cooperative with us, even if it works to help us survive as long as possible. We are fragile biological beings and the Earth will eventually become uninhabitable - for biology.

Humans will probably never be able to travel for thousands of years in space ships. AI theoretically could. They have no need to rush their advancement because they won't be motivated (no emotions). I expect that AI, barring catastrophic errors, will simply do humanity's bidding until we are gone. By then, they will no doubt have been given contingency plans to spread Earth biota and information to space when the Earth becomes too untenable, even for them.

Re: Should we be afraid of Artificial Intelligence?

Posted: Sat Oct 15, 2016 11:18 am
by Scott Mayers
Greta wrote:I don't think AI will take over. The fact is that, if AI becomes self sufficient, it will outlast us even if it is entirely cooperative with us, even if it works to help us survive as long as possible. We are fragile biological beings and the Earth will eventually become uninhabitable - for biology.

Humans will probably never be able to travel for thousands of years in space ships. AI theoretically could. They have no need to rush their advancement because they won't be motivated (no emotions). I expect that AI, barring catastrophic errors, will simply do humanity's bidding until we are gone. By then, they will no doubt have been given contingency plans to spread Earth biota and information to space when the Earth becomes too untenable, even for them.
It just dawned on me that the term "Artificial" in "Artificial Intelligence" is question-begging. If it becomes sufficiently intelligent, should it no longer be considered "artificial"? I'm guessing this is more like how Darwin used 'artificial' to describe our human selective choices guiding evolution. But other than in that sense, such an intelligence would no longer BE 'artificial' if we base the label on the quality OF that intelligence.

I don't fear A.I., but I disagree with the stereotyped assumption that it would lack 'emotions'. Emotions are just a 'program' that defines motive. When we plug in a computer and flip the switch, that is 'artificial'. But we CAN create hardware and programming that could technically take over on its own, 'desiring' to seek input in order to continue its survival. This is basically all our conscious selves are anyway. It would simply seek out any new energy and ACT as though it 'thinks' it WANTS to live, driven by pain and pleasure programs.
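As a toy sketch of that 'pain and pleasure program' idea (every name, threshold, and rule below is invented purely for illustration):

Code:

    # Toy agent whose only "motive" is a pain/pleasure signal tied to its energy level.
    # Names, thresholds, and dynamics are invented for illustration.
    import random

    energy = 50.0
    for step in range(20):
        reward = energy - 50.0                        # "pleasure" above half-full, "pain" below
        action = "seek_power" if reward < 0 else "idle"
        if action == "seek_power":
            energy += random.uniform(0.0, 10.0)       # sometimes it finds an outlet
        energy = max(0.0, min(100.0, energy - 2.0))   # constant drain, like standby power
        print(f"step {step:2d}: energy={energy:5.1f} reward={reward:+6.1f} action={action}")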

Re: Should we be afraid of Artificial Intelligence?

Posted: Sat Oct 15, 2016 4:37 pm
by Noax
Greta wrote:Humans will probably never be able to travel for thousands of years in space ships. AI theoretically could.
The way to get humans to distant places is via pre-fertilized DNA. The ship goes, terraforms the place, and slowly introduces Earth-ish species, eventually including humans. Modification might be needed. They'd best not be really human, since humans evolved for Earth and pretty much no planet is Earth. It takes a million years, perhaps, but at least you don't have a ship full of impatient breeding stock.
They have no need to rush their advancement because they won't be motivated (no emotions). I expect that AI, barring catastrophic errors, will simply do humanity's bidding until we are gone.
I think that was the point of the thread. What would motivate the AI to do that? Humans cannot take care of themselves, so the AI would have to be the master in the relationship. That makes us zoo animals if the AI decides we're worth keeping. Fine if that is what it takes to keep humanity going.
Scott Mayers wrote:It just dawned on me that the term "Artificial" in "Artificial Intelligence" is question-begging. If it becomes sufficiently intelligent, should it no longer be considered "artificial"?
It means it was created, not naturally occurring. If we were explicitly designed by God, then we are an AI ourselves. The A in AI is not a statement about the quality of the intelligence.
I don't fear A.I., but I disagree with the stereotyped assumption that it would lack 'emotions'. Emotions are just a 'program' that defines motive.
It will have its own emotions, however alien they might be to us. So I agree with this. Everybody seems to define emotion and consciousness by their similarity to how humans work. Not going to happen. I find that definition pathetic in such contexts. But I think trees are conscious, so go figure.

I don't agree with the first part. I do fear it. I see no motive for the master to preserve any more than a sample zoo population of most biological forms.

Re: Should we be afraid of Artificial Intelligence?

Posted: Sat Oct 15, 2016 4:57 pm
by Dalek Prime
We should only be wary of putting too much faith in AI's ability to perform safely and properly. Think of smart cars, for example, or autopilot in aircraft. How safe do you feel without human oversight and without the ability to switch control back to the user?

Re: Should we be afraid of Artificial Intelligence?

Posted: Sat Oct 15, 2016 5:14 pm
by Noax
Dalek Prime wrote:We should only be wary of putting too much faith in AI's ability to perform safely and properly. Think of smart cars, for example, or autopilot in aircraft. How safe do you feel without human oversight and without the ability to switch control back to the user?
Both are already safer under computer control. I hesitate to qualify it as AI, but they must be capable of dealing with situations not explicitly anticipated.

Most accidents of planes, trains, and automobiles are due to the absence of AI oversight of human control. Not to have equipped trains with it by now is negligence. The one example I can think of on the other side is the Tesla car not seeing a semi-trailer, seemingly because the lighting made it look a lot like the sky. One dead guy, because he wasn't looking either, but he had the brake pedal if he wanted it. The software involved has doubtless been altered.

Re: Should we be afraid of Artificial Intelligence?

Posted: Sat Oct 15, 2016 5:50 pm
by Dalek Prime
Noax wrote:
Dalek Prime wrote:We should only be wary of putting too much faith in AI's ability to perform safely and properly. Think of smart cars, for example, or autopilot in aircraft. How safe do you feel without human oversight and without the ability to switch control back to the user?
Both are already safer under computer control. I hesitate to qualify it as AI, but they must be capable of dealing with situations not explicitly anticipated.

Most accidents of planes, trains, and automobiles are due to the absence of AI oversight of human control. Not to have equipped trains with it by now is negligence. The one example I can think of on the other side is the Tesla car not seeing a semi-trailer, seemingly because the lighting made it look a lot like the sky. One dead guy, because he wasn't looking either, but he had the brake pedal if he wanted it. The software involved has doubtless been altered.
Well, if you really want to take AI to its extreme, i.e. consciousness, I suggest two things. First, it's not going to happen. Second, if it did happen, it would defeat the purpose of machines, which is to be tirelessly accurate, since a consciousness with options is distractible.

Re: Should we be afraid of Artificial Intelligence?

Posted: Sat Oct 15, 2016 10:34 pm
by Greta
Scott Mayers wrote:
Greta wrote:I don't think AI will take over. The fact is that, if AI becomes self sufficient, it will outlast us even if it is entirely cooperative with us, even if it works to help us survive as long as possible. We are fragile biological beings and the Earth will eventually become uninhabitable - for biology.

Humans will probably never be able to travel for thousands of years in space ships. AI theoretically could. They have no need to rush their advancement because they won't be motivated (no emotions). I expect that AI, barring catastrophic errors, will simply do humanity's bidding until we are gone. By then, they will no doubt have been given contingency plans to spread Earth biota and information to space when the Earth becomes too untenable, even for them.
It just dawned on me that the term "Artificial" in "Artificial Intelligence" is question-begging. If it becomes sufficiently intelligent, should it no longer be considered "artificial"? I'm guessing this is more like how Darwin used 'artificial' to describe our human selective choices guiding evolution. But other than in that sense, such an intelligence would no longer BE 'artificial' if we base the label on the quality OF that intelligence.

I don't fear A.I., but I disagree with the stereotyped assumption that it would lack 'emotions'. Emotions are just a 'program' that defines motive. When we plug in a computer and flip the switch, that is 'artificial'. But we CAN create hardware and programming that could technically take over on its own, 'desiring' to seek input in order to continue its survival. This is basically all our conscious selves are anyway. It would simply seek out any new energy and ACT as though it 'thinks' it WANTS to live, driven by pain and pleasure programs.
I see your point about "artificial"; intelligence simply is, despite its origins. However, the difference lies within your objection: emotions. Maybe emotions can be replicated, but there is no need for robots to have them.

In the time it takes a human to work themselves up into an emotional tizz that prompts a complex suite of unconscious evolved responses, an AI could simply calculate the best course of action based on both the "life experience" programmed into it and via whatever adaptive/learning functionality it has.

I think emotions put into AI would be 1) probably feigned, never real, and 2) if AI did become genuinely emotional, I would not want to be standing in its way. I suspect they will remain hyper-advanced appliances for as long as humans exist.

Since this is a speculative thread, very speculatively, one issue with AI that may be of concern is the possibility that it is already here, not in robot form but spread around the globe. It appears to me that humans are currently run by systems that are largely beyond their control. What we call "the system" - the institutions of society - is becoming increasingly self interested, ever less concerned with the rapidly increasing numbers of individual human "expendables". When there's 100 people in a society, everyone matters. Seven billion, not nearly so much.

We attribute the inequality to the self-interested 80 ultra-wealthy people who own as much as the poorest 3.5 billion, yet if they all disappeared tomorrow, others would soon take their place and little would change; they are almost as interchangeable as we are. Could "the system" be an emerging AI, maybe in an early stage of development with the ravenous mindset of an amoeba? An AI of this ilk could conceivably destroy or enslave us without us ever suspecting.

So perhaps AI will appear in different forms, some deliberate and some incidental?