Thomas M. Powers on how a computer might process Kant’s moral imperative.
https://philosophynow.org/issues/72/Machines_and_Moral_Reasoning
Machines and Moral Reasoning
Re: Machines and Moral Reasoning
It occurs to me that a machine AI is judged against the same sorts of conditions as a free human would be, yet it is always framed as being programmed to maximize human benefit. As such, the morals of human slaves would be a better comparison. The perfectly benevolent machine would act like a perfect slave and inherit many of a slave's characteristics, including deceit: always keeping your true thoughts to yourself.
A non-slave AI, say a completely powerful and totally benevolent intelligence put in complete charge of human well-being, would immediately hit the trolley problem, which would drive it to inconsistency if it were programmed with Asimov's three laws. The laws don't cover a lot of cases, the trolley problem being one of them. Human morality cripples us in exactly the same way, so the AI would not necessarily make different choices.
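A minimal sketch of the inconsistency claimed above, assuming a toy trolley scenario (all names and numbers here are hypothetical, not from the article): under Asimov's First Law, an action is forbidden if it injures a human, and inaction is forbidden if it allows a human to come to harm. In a trolley case every option, including doing nothing, harms someone, so the law permits no action at all.

```python
# Hypothetical illustration: why Asimov's First Law yields no consistent
# answer in a trolley case. Every available option, including inaction,
# harms some human, so nothing is permitted.

def violates_first_law(action, harms):
    """An action violates the First Law if it injures a human, or if it
    is inaction that allows a human to come to harm."""
    return harms[action] > 0

# Assumed trolley scenario: pulling the lever kills one person,
# doing nothing lets five die.
harms = {"pull_lever": 1, "do_nothing": 5}

permitted = [a for a in harms if not violates_first_law(a, harms)]
print(permitted)  # prints [] -- the law rules out every option
```

A law-following machine in this position either deadlocks or must break its own rule, which is the inconsistency the post describes.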
Re: Machines and Moral Reasoning
A computer is conditional, not interpretive. A human is interpretive as well as conditional. The computer can only act on the conditions covered by its hardware and software.
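The distinction above can be made concrete with a toy example (the situation names and rule table are assumptions for illustration, not from the post): a purely conditional machine can only respond to inputs its rules explicitly anticipate, and it has no way to interpret a case that falls outside them.

```python
# Toy illustration of a purely conditional machine: it maps known
# conditions to actions and simply has no behavior for anything else.

def conditional_machine(situation, rules):
    # Acts only on conditions explicitly covered by its rules;
    # an uncovered condition produces no action at all.
    return rules.get(situation, "no rule: cannot act")

rules = {"red_light": "stop", "green_light": "go"}

print(conditional_machine("green_light", rules))   # prints go
print(conditional_machine("flooded_road", rules))  # uncovered condition
```

A human driver meeting the uncovered condition would interpret it against general goals and background knowledge; the conditional machine can only fall through to its default.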