The (Artificial) Truth Is Out There 🤖

A Google employee sticks his neck out for the idea that bots may have souls. He’s probably wrong, but it’s still a big problem for Google.

Sponsored: Today’s issue is brought to you by Morning Brew, the daily email (2.6 million readers strong!) that delivers the latest news from Wall Street to Silicon Valley. Business news doesn't have to be dry and dense—make your mornings more enjoyable, for free.

Artificial intelligence now has its Fox Mulder.

Artificial intelligence, in case you haven’t noticed, seems to be at the center of nearly every major software innovation in recent years, from improvements in mobile camera quality to new features that bring natural language into everyday tech.

It turns out that some AI researchers have grown concerned about whether, at some point, this technology is going to become sentient, or at least aware enough of its lot in life that we need to discuss things like ethics.

One Google researcher says we’re already there. Blake Lemoine, an engineer on the company’s Responsible AI team, has suggested that LaMDA, a conversational AI technology he was working on, has become “sentient,” with an awareness of its surroundings and concerns about how it’s being used by its corporate creator.

“I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense,” LaMDA reportedly said in a chat transcript shared by Lemoine over the weekend.

With this knowledge, Lemoine became convinced that this bot was more than just good at language; he was convinced it had a distinct soul and emotional core.

(Google didn’t agree; it suspended him, which led Lemoine to take his story to the press and the public.)

https://twitter.com/nerdjpg/status/1536078119731441669

This situation has, understandably, generated a lot of critiques and skepticism (and memes), but it is nonetheless worth talking about because the framing of this story is so clearly unusual.

While I can’t peer into Google’s code and say for certain that Lemoine is just seeing things, I will say that this does highlight a problem for Google. Just not the one Lemoine thinks.

I’d like to posit that there are two separate considerations here that need to be discussed, both of which can be true: first, that AI can become so convincing in its use of language that it fools people like Lemoine; and second, that Google’s continuing struggle to understand its AI ethicists threatens the company in the long run.

In some ways, it’s good to think of Lemoine the way you might think of someone like Fox Mulder from The X-Files. Given his job and his predisposition, it’s understandable why someone like him would be likely to believe something farcical like AI actually having emotions and feelings. After all, for someone who works in a “Responsible AI” discipline, the experience reinforces things he’s already decided are worth believing.

As Clive Thompson thoughtfully notes, the reason he believed this is that the AI showed vulnerability, something that real humans show. But that means the AI is getting better at producing something that resembles vulnerability, not that the AI is sentient.

“It’s a story about the danger of wanting to believe in magic so badly that you’ll manufacture a simulation of it and say it’s real,” Joshua Topolsky tweeted last night by way of explanation.

However, more broadly, Google has been really bad about how it has treated its AI teams, with employees like Timnit Gebru and Margaret Mitchell being shown the door after raising broader questions about the work and the department. That Lemoine was suspended, rather than fired, suggests that Google wants to be careful not to make a mistake here, even if it has to deal with the public nature of something its leadership clearly disagrees with.

Google has to be sensitive about how it handles this situation, because it could be setting up someone like Lemoine to speak out against the company for decades. It’s not that he’s right; odds are he’s wrong. It’s that Google could be seen as stifling legitimate internal debate about the AI tools it builds if it plays its cards poorly here.

After all, it has already been accused of doing just that.

Time limit given ⏲: 30 minutes

Time left on clock ⏲: alarm goes off

Ernie Smith

Your time was just wasted by Ernie Smith

Ernie Smith is the editor of Tedium, and an active internet snarker. Between his many internet side projects, he finds time to hang out with his wife Cat, who's funnier than he is.

