Google Fires Engineer Who Warned That Company's AI Reached Sentience


On Friday, Google fired Blake Lemoine, a software engineer who went public with his concerns that a conversational technology the company was developing had achieved sentience.

Lemoine went outside the company to consult with experts on the tech’s potential sentience, then publicly shared his concerns in a Medium post and a subsequent interview with The Washington Post. Google had suspended Lemoine in June for violating a confidentiality policy, and he has now been fired. Lemoine himself is slated to discuss what happened on an upcoming episode of the podcast for Big Technology, the Substack that first reported the story.

Google continues to deny that its LaMDA technology, or Language Model for Dialogue Applications, has achieved sentience. The company says LaMDA has been through 11 separate reviews, and it published a research paper on the technology back in January. Lemoine’s fireable offense, Google said in a statement, was sharing internal information.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data safety policies that include the need to safeguard product information,” Google said in the statement. “We will continue our careful development of language models, and we wish Blake well.”

LaMDA is described as a sophisticated chatbot: send it messages, and it will auto-generate a response that fits the context, Google spokesperson Brian Gabriel said in an earlier statement. “If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
