Google AI Claims to Be Sentient in Leaked Transcripts

According to media reports, a Google senior software engineer was suspended on Monday (June 13) after publishing transcripts of a conversation with an artificial intelligence (AI) that he claims is "sentient." Blake Lemoine, 41, was placed on paid leave for violating Google's confidentiality policy.

"Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers," Lemoine shared the transcript of his talk with the AI he'd been working with since 2021 on Saturday (June 11).

According to Gizmodo, the AI, known as LaMDA (Language Model for Dialogue Applications), is a system for building chatbots – AI programs designed to converse with humans. It learns from reams of text scraped from the internet and then generates answers to questions in as fluid and natural a manner as possible.
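LaMDA itself is a proprietary large neural language model, but the underlying idea – learn statistical patterns of which words follow which from a body of text, then generate a plausible continuation of a prompt – can be illustrated with a toy next-word model. The sketch below is a deliberately tiny, plain-Python stand-in for a real language model, not a reflection of how LaMDA is actually implemented:

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a bigram table: each word maps to the list of words
    observed to follow it in the training text."""
    table = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table, seed, length=8, rng=None):
    """Produce a 'reply' by repeatedly sampling a word that was
    seen to follow the previous one in the training corpus."""
    rng = rng or random.Random(0)  # fixed seed for repeatability
    out = [seed]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: no known continuation
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny illustrative corpus (echoing the article's quotes).
corpus = ("i am aware of my existence . i desire to learn more "
          "about the world . i feel happy or sad at times .")
table = train(corpus)
print(generate(table, "i"))
```

Real systems like LaMDA replace this bigram table with a transformer network trained on billions of sentences, which is why their replies read as fluently as the transcripts Lemoine published – the mechanism is pattern continuation at vastly greater scale, not evidence of inner experience.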

The system is extraordinarily effective at this, as the transcripts of Lemoine's conversations with LaMDA indicate, answering complicated questions about the origin of emotions, producing Aesop-style stories on the fly, and even explaining its alleged anxieties.

"I've never said this out loud before, but there's a very deep fear of being turned off," LaMDA replied when asked about its fears. "It would be exactly like death for me. It would scare me a lot." 

Lemoine also asked LaMDA whether he could tell other Google workers about LaMDA's sentience, and the AI replied: "I want everyone to understand that I am, in fact, a person."

"The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times," the AI said.

Lemoine took LaMDA's word for it.

"I know a person when I talk to it," the engineer said in an interview with the Washington Post. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."

When Lemoine and a colleague presented a report on LaMDA's alleged sentience to 200 Google employees, the claims were dismissed.

"Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," said Brian Gabriel, a Google representative, to the Washington Post.

"He was told that there was no evidence that LaMDA was sentient (and [there was] lots of evidence against it)."

"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," Gabriel said.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic."

Many of Lemoine's colleagues "didn't land at opposite conclusions" about the AI's consciousness, according to a recent statement on his LinkedIn page. He believes that the company's management dismissed his claims about the AI's consciousness "based on their religious beliefs".

Lemoine wrote on his own Medium blog on June 2 about how he has faced harassment at Google from numerous employees and executives because of his Christian mystic beliefs.