There's an ethical discussion going on around driverless cars: in rare situations where a choice has to be made, the car will be faced with deciding who to kill. While it's currently felt that, overall, they will be safer because they are machines, in certain situations they will still be faced with having to make a human choice: who to kill?
How are these machines to be programmed to make the right choice, if there even is one? What should their decision be based on? (So far there has been very little legislative action on this, though that will change as time goes on.)
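To make the question concrete, here is a minimal sketch (in Python, with made-up names and numbers, not anyone's real implementation) of the kind of rule that sometimes gets proposed: a purely utilitarian "minimize expected harm" calculation. Even this simplest possible rule forces the programmer to encode ethical assumptions somewhere, e.g. that every life counts equally and that the passenger gets no special priority.

[code]
# Hypothetical sketch only: a utilitarian "minimize expected harm" rule.
# Every name and number here is an illustrative assumption, not a real
# autonomous-vehicle API.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float   # estimated deaths if this path is taken
    probability_of_harm: float   # chance the harm actually occurs

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the lowest expected harm (casualties x probability)."""
    return min(options, key=lambda m: m.expected_casualties * m.probability_of_harm)

# Made-up scenario: swerve into a wall (risking the passenger) versus
# staying on course (risking a group of pedestrians).
options = [
    Maneuver("swerve_into_wall", expected_casualties=1, probability_of_harm=0.7),
    Maneuver("stay_on_course", expected_casualties=3, probability_of_harm=0.4),
]
print(choose_maneuver(options).name)  # prints "swerve_into_wall" (0.7 < 1.2)
[/code]

Notice that the hard part isn't the code, it's the weighting: change the numbers, or add a term that favors the passenger, and the "right choice" changes with it.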
PhilX
Edit: here's an article to go with this thread:
http://i100.independent.co.uk/article/w ... JxGf8mVHue