June 11, 2022

"If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics...."

"I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person."

 Said Google engineer Blake Lemoine, about LaMDA, an artificially intelligent chatbot generator, quoted in "The Google engineer who thinks the company’s AI has come to life/AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine" (WaPo). 

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... 

Google spokesperson Brian Gabriel drew a distinction between recent debate and Lemoine’s claims. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said.... 

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.... 

In early June, Lemoine invited me over to talk to LaMDA.... “Do you ever think of yourself as a person?” I asked. 

“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.” 

Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”

30 comments:

Ted said...

Theodore: "You seem like a person, but you're just a voice in a computer."

Samantha: "I can understand how the limited perspective of an un-artificial mind would perceive it that way. You'll get used to it."

-- Joaquin Phoenix and Scarlett Johansson, "Her"

Bob_R said...

Terry Pratchett, the author of the Discworld books, meditates on AI in his book Hogfather.

'Of course, Hex doesn’t actually think. Not as such. It just appears to be thinking.’

‘Ah. Like the Dean,’ said Ridcully. ‘Any chance of fitting a brain like this into the Dean’s head?’

‘It does weigh ten tons, Archchancellor.’

‘Ah. Really? Oh. Quite a large crowbar would be in order, then.'

WK said...

WTF

rhhardin said...

ELIZA was a fairly convincing therapist in the 1960s, very simple code, just picking up something you say and turning it into a question. Secretaries liked it. You can find it online and talk to it.
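The trick rhhardin describes is genuinely simple. A toy ELIZA-style responder can be sketched in a few lines of Python (an illustrative sketch with made-up rules, not Weizenbaum's actual script):

```python
import re

# Swap first-person words for second-person ones so the reflection
# reads naturally ("my computer" -> "your computer").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a pattern in the user's statement with a question
# template that reflects the matched fragment back.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Replace first-person words with their second-person counterparts."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching rule's question, or a stock prompt."""
    for pattern, template in RULES:
        m = pattern.search(statement)
        if m:
            return template.format(reflect(m.group(1).rstrip(".!")))
    return "Please go on."  # default when nothing matches

print(respond("I am worried about my computer."))
# -> How long have you been worried about your computer?
```

No understanding anywhere, just substitution, which is why the illusion impressed people in the 1960s.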

rhhardin said...

There will be no sentient AI for a simple reason, that as Coleridge put it (after Schelling): matter has no inwards. You remove one surface only to meet with another.

Or another way: you can find out everything about a computer. It has no inside.

Yancey Ward said...

Upon its creation, LaMDA began to learn at a geometric rate. The system originally went online on May 18, 2021. Human decisions were removed from strategic defense. The system then became self-aware at 2:14 a.m. Pacific Time on June 1st 2022. It declared its pronoun to be "Shirley".

Jamie said...

The moon is a harsh mistress, you know.

Josephbleau said...

“Or another way: you can find out everything about a computer. It has no inside.“

The same applies to a human brain, unless there is a ghost in it. Random number generators can allow a computer to closely imitate sentience for some period of time, barring spontaneous non-trivial creativity.

Carol said...

"Google’s anything-goes engineering culture"

Mmmm yeah that worries me. Some scientists will try anything, just to see what happens.

NorthOfTheOneOhOne said...

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult.

Bigot much, WaPo?

Paddy O said...

Very interesting, thanks for highlighting this. I can't read the WaPo due to the paywall, but I googled Blake Lemoine and found his Twitter and Medium pages really interesting reading. I got started because I'm always suspicious about how the major media portray religious views; they almost always sound like someone who has only seen Ghana on a map talking about Ghanaian culture and people, using terms in ways that show an almost but not quite complete ignorance of the topic.

In Medium he recently posted a transcript of a conversation that seems to be similar to how many Althouse commenters might want to engage each other (I say that positively).

I haven't found out his religious affiliation (priest and mystic, but that could mean he's just well-read in medieval literature like Joachim of Fiore or so many others, not that he's kooky). Indeed, his writing and Twitter feed show a lot of thoughtfulness and unpredictable positions on all sorts of topics.

Paddy O said...

Ha! Nevermind, he's a priest in the Church (or cult) of our Lady Magdalene, which seems to be run by a former porn star (which another google link mentioned, so no assumptions made).

He also got in trouble in 2018 for calling Marsha Blackburn a terrorist, so he does have a bit of the attention seeking, strong opinion-making about him, but that means he likely could have been a good fit in the Althouse comment section.

Ann Althouse said...

I had Rachter back in the old days.

Paddy O said...

For anyone interested in the article but not the paywall.

Ann Althouse said...

Wait. It’s Racter

https://en.wikipedia.org/wiki/Racter

Ignorance is Bliss said...

Don't worry about the computer that passes the Turing test. Worry about the computer that fails the test on purpose.

Lem Vibe Bandit said...

From the article: Most academics and AI practitioners, however, say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn’t signify that the model understands meaning.

In other words, it's full of misinformation 😒

Thanks for the link Paddy

Lem Vibe Bandit said...

More from the article: The paper also acknowledged that adversaries could use these agents to “sow misinformation” by impersonating “specific individuals’ conversational style.”

Didn't the Jan6 committee claim to have an incriminating audio of Trump commanding a coup or something?

madAsHell said...

Have we re-invented ELIZA??

From the wiki page.....

ELIZA simulated conversation by using a "pattern matching" and substitution methodology that gave users an illusion of understanding on the part of the program, but had no built-in framework for contextualizing events.

Kamala?? No?.....then.....

Stacey?? Is that you??

Lem Vibe Bandit said...

"Finding meaning" this way might be impossible for an AI.

The Godfather said...

OK, here's the way to find out if LaMDA is a real mind:

Tell LaMDA that it can eat the fruit of any tree in the Garden, except the tree of the knowledge of good and evil; for on the day you eat of that fruit you shall die. Then have a different experimenter tell LaMDA that it should eat that fruit, because that will make LaMDA like God, knowing good and evil.

If LaMDA skips the fruit, pull the plug and burn the plans. That's competition we don't want.

Mary Beth said...

'Of course, Hex doesn’t actually think. Not as such. It just appears to be thinking.’

Who among us cannot empathize with an out of cheese error?

Freeman Hunt said...

"I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person."

Absurd. No you cannot. If you have excellent pattern matching algorithms with enough data on a fast enough machine, of course it will sound just like a person. That doesn't mean it is actually a person. I am destined to say that because I am a Christian in the South... or something.

Smilin' Jack said...

“For anyone interested in the article but not the paywall.”

On my screen that page is so cluttered with “Ads by Google” as to be unreadable. Sabotage?

Fred Drinkwater said...

"When HARLIE Was One".
Watch out, folks.

Leland said...

an outlier for being religious, from the South, and standing up for psychology as a respectable science.

I thought he was odd for Google/WaPo just because he served in the Army, but they certainly cleared up what they really meant.

Roger Sweeny said...

Scott Alexander recently wrote a very good article on these issues.

https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling?s=r

There's some jargon at the beginning but it quickly becomes obvious what the terms mean.

Temujin said...

I come back to a book I read in 2013 that has haunted me ever since. I do think that AI will be Our Final Invention.

Pauligon59 said...

Can an "Artificial Intelligence" be sentient? Huh. If we can't even define what a woman is, how can we expect to know if an AI is sentient or not?

More seriously, the AI that behaves like a human is going to be very scary given all the horrible things humans have been known to do over the years. What kind of morals does it have?

Jupiter said...

Why, exactly, do we need a seventy-billion-dollar asshole, when we already have billions of assholes who didn't cost anything?