Google’s ethical principles for the use of artificial intelligence are little more than a smokescreen, but they show that many engineers are rightly worried by the possible uses of the technology they’re developing

Frankenstein’s monster haunts discussions of the ethics of artificial intelligence: the fear is that scientists will create something that has purposes and even desires of its own and which will carry them out at the expense of human beings. This is a misleading picture because it suggests that there will be a moment at which the monster comes alive: the switch is thrown, the program run, and after that its human creators can do nothing more. They are left with guilt, perhaps, but no direct responsibility for what it goes on to do. In real life there will be no such singularity. Construction of AI and its deployment will be continuous processes, with humans involved and to some extent responsible at every step.

This is what makes Google’s declaration of ethical principles for its use of AI so significant: it appears to be the result of a revolt among the company’s programmers. The senior management at Google saw the supply of AI to the Pentagon as a goldmine, if only it could be kept from public knowledge. “Avoid at ALL COSTS any mention or implication of AI,” wrote Google Cloud’s chief scientist for AI in a memo. “I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”
