Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.
Cade Metz at the New York Times 11/21
Of course here the "intelligence" is basically human. How would an autonomous machine "mind" avoid bias, define hate speech, or eliminate anything that might be deemed arbitrary?

Facial recognition systems and digital assistants show bias against women and people of color. Social networks like Facebook and Twitter fail to control hate speech, despite wide deployment of artificial intelligence. Algorithms used by courts, parole offices and police departments make parole and sentencing recommendations that can seem arbitrary.
Computer scientists and ethicists will program the AI entities to reflect their own moral and political value judgments, rooted existentially in dasein. Their own prejudices. Just as we do when we indoctrinate our children. When is that part going to be addressed by them? And if down the road machine intelligences reach the point where they are "on their own", how will it be decided what is or is not rational and virtuous behavior? Philosophically? Will their great minds draw on our own great minds...or will they dispense with human intelligence altogether?

A growing number of computer scientists and ethicists are working to address those issues. And the creators of Delphi hope to build an ethical framework that could be installed in any online service, robot or vehicle.
Same thing. As long as it is flesh and blood human beings who are doing the informing, then one or another partisan -- intolerant? -- set of assumptions will be used in regard to the actual legislation enacted to either reward or punish particular behaviors.

“It’s a first step toward making A.I. systems more ethically informed, socially aware and culturally inclusive,” said Yejin Choi, the Allen Institute researcher and University of Washington computer science professor who led the project.