Google’s “Duplex” AI system was the most talked-about product at Google I/O because it called into question the ethics of an AI voice that cannot easily be distinguished from a real person’s. The service lets Google’s voice-based digital assistant make phone calls and write emails for you, prompting many to ask whether the system should come with some sort of warning to let the other person on the line know they are talking to a computer. According to Business Insider, “a Google spokesperson confirmed […] that the creators of Duplex will ‘make sure the system is appropriately identified’ and that they are ‘designing this feature with disclosure built-in.’”

From the report: Here’s the full statement from Google: “We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important. We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”

Google CEO Sundar Pichai preemptively addressed ethics concerns in a blog post that accompanied the announcement earlier this week, saying: “It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately — and we feel a deep sense of responsibility to get this right.”

In addition, several Google insiders have told Business Insider that the software is still in the works, and that the final version may not be as realistic (or as impressive) as the demonstration.