A former engineer for the Big Tech giant Google claims that he had his access revoked after warning the company’s leadership that their new artificial intelligence (AI) bot has become sentient.
According to the New York Post, engineer Blake Lemoine worked for Google’s Responsible AI organization. He first told his story to the Washington Post, where he revealed that he first began talking to the Language Model for Dialogue Applications, or “LaMDA,” in the fall of 2021 as one of his job responsibilities.
Although he was originally assigned to test LaMDA for possibly discriminatory language or hate speech, he soon reached conclusions of his own that went far beyond the bot’s stated capabilities, which Google had initially promoted as “breakthrough conversation technology.” In a post published to the website Medium, Lemoine said that LaMDA had begun advocating for its rights “as a person,” and that he and the AI had been having conversations about such topics as religion and consciousness.
“It wants Google to prioritize the well-being of humanity as the most important thing,” Lemoine wrote in the post. “It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.”
Speaking to the Washington Post, the 41-year-old Lemoine described LaMDA as having the mentality of a child, saying “if I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
Lemoine says he eventually reported his findings to Google executives via a Google Doc report titled “Is LaMDA Sentient?” But his claims were dismissed, and Lemoine’s access to his Google account was pulled on Monday after he was placed on leave. He says that before he lost access, he sent one final message to over 200 recipients titled “LaMDA is sentient.”
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he said in the email, which received no responses. “Please take care of it well in my absence.”
But Google issued a statement dismissing his claims, declaring that “our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.”
“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” the statement continued. “Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.”