Killer Robots
"Nobel Peace laureate Jody Williams is joining with Human Rights Watch to oppose the creation of killer robots — fully autonomous weapons that could select and engage targets without human intervention. In a new report, HRW warns such weapons would undermine the safety of civilians in armed conflict; violate international humanitarian law; and blur the lines of accountability for war crimes. Fully autonomous weapons do not exist yet, but high-tech militaries are moving in that direction with the United States taking the lead. Williams, winner of the 1997 Nobel Peace Prize for her work with the International Campaign to Ban Landmines, joins us along with Steve Goose, director of Human Rights Watch’s Arms Division. [Includes rush transcript]"
http://www.democracynow.org/2012/11/20/ ... l_laureate
I watched this fascinating program this morning. It is only about ten minutes long. A moral issue is raised and cited, but not discussed, during the segment.
Is it possible to construct a robot that thereafter is absolutely autonomous?
Re: Killer Robots
Killer robots are small potatoes. In 1945, the U.S. wiped out hundreds of thousands of civilians with just two bombs.
Re: Killer Robots
Well, that was the Atomic Age... this age is called the Age of Information, and we have a new threat to deal with now.
Re: Killer Robots
This thread and my question relate to Artificial Intelligence: http://en.wikipedia.org/wiki/Artificial_intelligence
Does anyone know anything about AI?
Re: Killer Robots
I can't say I'm fully informed on AI, but I do think you've raised a very important and interesting question, especially if you're willing to expand the scale of your inquiry.
It's not just robots that are coming, but a wide range of powerful new technologies that have great potential to serve both good and evil. Genetic engineering comes to mind as just one example of many.
To me the underlying question is, do we control knowledge, or does it control us?
Do we even have the choice to say, stop the development of genetic engineering, or AI, and other such revolutionary developments? Or are we just along for the ride?
To me, it's not at all clear that we are driving the boat. If we were in control, why in the world would we create atomic weapons, which have the power to erase 10,000 years of human civilization in 30 minutes?
Re: Killer Robots
The Voice of Time wrote: Well, that was the Atomic Age... this age is called the Age of Information, and we have a new threat to deal with now.
If this age is now called the 'Age of Information', or the 'Golden Age', what will the next age be called? The 'Age of Civilization'?
Re: Killer Robots
tbieter wrote: Is it possible to construct a robot that thereafter is absolutely autonomous?
Could be, at least in theory. That is not to say that such a robot would have "free will" any more than we do, though. The actions of the robot would still depend upon its programming. If it did something other than intended, it would be malfunctioning.
Felasco wrote: To me the underlying question is, do we control knowledge, or does it control us? Do we even have the choice to say, stop the development of genetic engineering, or AI, and other such revolutionary developments? Or are we just along for the ride?
The answer depends on how deep you are willing to dig. On a very fundamental level, I don't think anybody is in control of anything. Everything that happens is a consequence of what happened before, and thus inevitable. But we don't live our lives on that fundamental level; we experience choices as real. On the level where choices exist, it's still hard, and probably not even desirable, to avoid developing new technologies. The question is how we put them to use.
As far as ethics goes, the difference between nuclear technology and a kitchen knife, for instance, is only a matter of scale. A kitchen knife is a dangerous tool, mostly used for cooking, but sometimes also for stabbing people to death. And accidents do happen. Nuclear technology is a vastly more dangerous tool, mostly used for producing power, but sometimes also for killing people. And accidents do happen, as the Fukushima disaster recently reminded us.
Re: Killer Robots
The Voice of Time wrote: Well, that was the Atomic Age... this age is called the Age of Information, and we have a new threat to deal with now.
bus2bondi wrote: If this age is now called the 'Age of Information', or the 'Golden Age', what will the next age be called? The 'Age of Civilization'?
The Nano Age.
Re: Killer Robots
Notvacka, thanks for your thoughtful response.
Notvacka wrote: The answer depends on how deep you are willing to dig. On a very fundamental level, I don't think anybody is in control of anything. Everything that happens is a consequence of what happened before, and thus inevitable.
While not necessarily disagreeing with this point, we might set it aside for now. Because if it's true, there's no point in anybody talking about anything. And the talking show must go on!
Notvacka wrote: Nuclear technology is a vastly more dangerous tool, mostly used for producing power, but sometimes also for killing people. And accidents do happen, as the Fukushima disaster recently reminded us.
Yes, that's the point: accidents do happen. Do we want to create ever more powerful tools that, when misused, could efficiently erase civilization from the face of the Earth, or radically change our situation for the worse?
We assume we can keep creating ever more powerful tools, and that we are clever and sane enough to manage them to avoid catastrophic outcomes. At what point does this stop being true? At what point do the killer robots become so smart they turn on us? At what point do we lose control of all these technologies?
As we've seen, we have only marginal control over nuclear weapons. It's inevitable that smaller and smaller organizations will be getting these weapons. It's inevitable that some of these groups will be insane. It's inevitable that cities containing millions will be destroyed.
Even the two most advanced countries in the world came within a whisker of destroying each other in 1962.
As I see it, the problem is that every technology comes with a price tag, and as the technology becomes ever more powerful, the price tag will inevitably become completely unacceptable.
Probably only an epic catastrophe will make us take such questions seriously.