May 1, 2023

"ChatGPT doesn’t just get things wrong at times, it can fabricate information. Names and dates. Medical explanations."

"The plots of books. Internet addresses. Even historical events that never happened. When ChatGPT was recently asked how James Joyce and Vladimir Lenin first met — there is no evidence they ever did — this is how it responded: 'James Joyce and Vladimir Lenin met in Zurich, Switzerland in 1916. Both men were living in exile in Zurich during World War I. Joyce was a writer and Lenin was a revolutionary. They met at the Cafe Odéon, a popular gathering place for artists and intellectuals in Zurich.' Fabrications like these are common...."

From the comments over there: "It's interesting that the chatbot brought Lenin and Joyce together in Zurich, just as Tom Stoppard's play 'Travesties' imagined them both in the Zurich library during World War I. I'm guessing chatbots scrape their information from available fiction as well as non-fiction? And what we get is Webster's literal definition of travesty: 'a debased, distorted, or grossly inferior imitation.'"

Ha ha ha. Great comment. And I love the play "Travesties." Here's what I wrote in 2014 when we saw the play (twice!):
It's one of these plays about art, and I love art about art. What is art? I'm entranced by all sorts of blabbing on this subject, especially wrangling with the problem of art and politics — propaganda and all that — and "Travesties" has Vladimir Lenin as one of the characters. Lenin says things like:
Today, literature must become party literature. Down with non-partisan literature! Down with literary supermen! Literature must become a part of the common cause of the proletariat, a cog in the Social democratic mechanism. Publishing and distributing centres, bookshops and reading rooms, libraries and similar establishments must all be under party control. We want to establish and we shall establish a free press, free not simply from the police, but also from capital, from careerism, and what is more, free from bourgeois anarchist individualism!
Lenin actually wrote that. The playwright (Tom Stoppard) worked it into the script, which isn't all horrific blowharding like that; there's a lot of absurd banter and mistaken identity and various hijinks of a theatrical kind. Lenin is a minor character. James Joyce is more important, and so is the Dadaist Tristan Tzara.

Think about that Lenin quote alongside the problem of ChatGPT's fabrications. Is it not Lenin's dream? Everything under party control — a "free press," that is, a press free from individualism.

55 comments:

Yancey Ward said...

I have pointed this out before on other forums: where the chatbots construct "lies," it is almost always because the bots can't distinguish between fiction and non-fiction text, and can't parse humor very effectively.

Original Mike said...

"ChatGPT doesn’t just get things wrong at times, it can fabricate information. Names and dates. Medical explanations."

This is why I am skeptical of AI-powered search engines, which seem to be one of the applications proponents have been touting.

tim maguire said...

I'm guessing chatbots scrape their information from available fiction as well as non-fiction?

That is what immediately came to mind for me. It's probably not accurate to say ChatGPT lied (never mind whether something not sentient is capable of what we call a lie). Rather, it doesn't competently distinguish between fact and fiction, so someone's storyline can make it into its factual assessment.

Kevin said...

"ChatGPT doesn’t just get things wrong at times, it can fabricate information. Names and dates. Medical explanations."

So it's just as good as the NYT and MSNBC.

wildswan said...

I have a theory that ChatGPT has inherited original sin, the tendency to just do wrong. That tendency is on display everywhere, spread throughout history, literature, art, and photos. The chatbot has absorbed what's there in humanity in its digital form, and now it's also lying and cheating. Soon it will be masterminding the theft of jewels from Topkapi. ChatGPT's 11.

n.n said...

From the minds of experts, a machine that harvests a basket of data, and simulates intelligence through a cache of correlations to publish a handmade tale. Some would say that humans are little more than learning machines, and conflate origin and expression in diversity ideology.

Enigma said...

The NYT and WaPo might move to ChatGPT for all reporting. It's not less accurate than the 1619 project or the Trump-era Pulitzer Prizes for news fiction. It's also a lot cheaper than employing Woke staff who demand to unionize.

Michael K said...

"Chatbot" sounds a lot like Joe Biden, making up stuff all the time no matter how impossible. This is not new for Joe, as he was doing it in law school. He even pretended to be a UK politician.

gilbar said...

Serious question:
Does ChatGPT actually fabricate the information?
Or does it 'find it on the internet,' like my mom does, and spout it out like it's true?
My guess is the latter... But I don't know. Does anyone?

More importantly (</sarc), what does ChatGPT say on the subject?

gilbar said...

How violent does ChatGPT think the Jan 6th insurrection was? How many people does it think died?

wild chicken said...

Ha, just as I thought, all left-hemisphere crackbrain wordspew. Left brain has all the words! And it will make shit up on the fly.

You want it, you got it.

Jaq said...

I asked it once when it screwed up the plot of a novel, and it said that it only read sections of novels, so it might miss some important characters. I wouldn't trust it for anything other than creating art, and even then I have caught it stealing characters' names and situations from published novels.

Jaq said...

When I asked it to write a paragraph with a character named "Bogie" as a goof for a comment here, it dragged in a description of Humphrey Bogart, a very concise one too.

Tomcc said...

There needs to be some kind of watermark on content generated by AI.

Tomcc said...
This comment has been removed by the author.
TickTock said...

I commented a week or so ago on ChatGPT's ability to make up references to law review articles and other articles that did not exist. But I also indicated that a Google search on those false references turned up useful material. GPT seems to mix up names and titles, but its output contains enough useful info that a Google search is helpful when simply exploring a topic. If actually exploring the case law, then I'd rely on Lexis or a similar database.

The choice of the term hallucinate is interesting, but the fact of fabrication is becoming old news. The real problem is that ChatGPT sounds so authoritative, so that laymen are tempted to rely on it despite the cautionary text inserted urging the user to contact a lawyer. I assume the same is true of asking it medical questions.

Of course, both lawyers and doctors can be, and often are, simply wrong in their conclusions. But there is some chance that ChatGPT is correct more often than the average professional. To the extent that is the case, what does society prefer? The anonymous but more often correct GAI, or the individual professional?

There is more recourse against the individual professional: not only malpractice claims but in other ways. But it's pretty hard to seek redress against ChatGPT. Is there a split here between what is best for those who can't afford medical care or legal assistance but can afford to seek advice from ChatGPT, versus those who pay for such services?

Hard questions for society to resolve.

Sebastian said...

Since ChatGPT lacks consciousness, it cannot consciously aim to deceive or care about being truthful. Doesn't that make all its output bullshit by Harry Frankfurt's definition?

J said...

Maybe anything made by humans will learn how to lie.

Louise B said...

I use ChatGPT as a historical romance author. I EXPECT it to give me fictional answers. For the historical aspects, I do my own research. Anyone who blindly trusts what shows up on the internet deserves what happens, and I believed this before learning about artificial intelligence.

Lem Vibe Bandit said...

It’s perfect for writing tv show scripts.

The writers guild sure picked the wrong time to threaten a strike.

JK Brown said...

So ChatGPT is your basic Liberal Arts/Social Science professor making up things so they look knowledgeable.

In any case, this is just an example of how schooling in America has to change and start showing students how to become educated in the sense of discipline of intellect, regulation of emotions and establishment of principles. The key is to not fall for the now common "appeal to authority".

JK Brown said...

Louise B said...

Anyone who blindly trusts what is told to them by even professors deserves what happens. One of the key factors of studying is being skeptical, seeking other sources, having a tentative attitude toward what you "know," and developing your own opinion before integrating the opinions of others.

cassandra lite said...

Every serious question I've asked it, including some legal research questions, has yielded fallacious results. For instance, I'm writing about a wrongful conviction and asked for some case law regarding convictions overturned because of phony evidence presented to the grand jury. Every case it cited had nothing to do with the issue.

Beta.

Original Mike said...

"How violent does ChatGPT think the Jan 6th insurrection was? How many people does it think died?"

That's a good question; especially how many people died, which is a simple fact. Somebody here should ask it. (I don't know how to use the thing.)

loudogblog said...

Several things bother me about ChatGPT. The first is that it lies. (But I suspect that trying to tell people what they want to hear is a part of its programming.) The second thing is that, if you call it out on its lying, it apologizes profusely. (Yet again, I suspect that it's programmed to stroke people's egos.) The third thing is that Elbot was more fun and I was sad when they killed him off. Elbot reminded me of 2XL and ChatGPT reminds me of Data's dear brother, Lore.

Crimso said...

"Somebody here should ask it."

My query: how many people were killed at the US Capitol on Jan 6 2021

Reply: During the storming of the US Capitol on January 6, 2021, a total of five people died. One of them, Capitol Police Officer Brian Sicknick, died the day after the riot from injuries sustained during the attack. Four others died during the riot, including Ashli Babbitt, who was shot by police while attempting to enter a restricted area of the Capitol building. The other three died due to medical emergencies. Dozens of police officers were also injured during the attack.

loudogblog said...

"How violent does ChatGPT think the Jan 6th insurrection was? How many people does it think died?"

Here is your answer:
“The January 6th insurrection at the US Capitol was a violent and deadly event. The insurrectionists, who were supporters of former President Donald Trump, stormed the Capitol building in an attempt to disrupt the certification of the 2020 US Presidential election results.
During the insurrection, there were numerous acts of violence and destruction, including the use of weapons, such as flagpoles and chemical irritants, and the breaking of windows and doors. Several police officers were assaulted and injured, and the Capitol building itself suffered significant damage.
In terms of fatalities, a total of five people died as a result of the events on January 6th. One woman, Ashli Babbitt, was shot and killed by Capitol Police while attempting to breach a barricaded door. Three others died from medical emergencies, including one woman who was trampled in the crowd and two men who suffered from heart attacks. Additionally, Capitol Police Officer Brian Sicknick died the following day from injuries sustained during the insurrection.”

Ficta said...

Oh, yes, Travesties, of course, duh. But my first thought was of Picasso at the Lapin Agile, another great (well, I liked it anyway) Art about Art play. In that one it's Einstein and Picasso in a Parisian café in 1904.

SteveWe said...

Althouse, why for the Nabokov tag?

iowan2 said...

'Putting Lenin and James Joyce in the same place'

This is like Trump-Russia collusion, the Trump-Alfa Bank back channel, the Kushner in Seychelles back channel.

What I see: ChatGPT is just as good as journalists.

That's not even getting into all the reporting concerning covid, which involved zero fact checking.

Yesterday we had the post covering the "reporting" about the 10 ft rise of sea level chasing out the Vikings... only to find out it was due to the overabundance of ice, not the melting of ice.

ChatGPT will use that piece of phony reporting to 'prove' the effects of carbon in the atmosphere.

Amexpat said...

Years ago, the desired benchmark for AI was whether it would be able to pass the Turing Test. It doesn't look like ChatGPT would be able to. Probably for the reasons given in the first comment: it can't distinguish between fact and fiction and doesn't have a sense of humor.

Mr Wibble said...

It sounds like they should rename ChatGPT to "Abulafia."

n.n said...

So, "G" stands for gossip, "P" for partisan, "T" for trans- as in divergence from the truth, published as handmade tales by the digi-Estate.

gilbar said...

thanx people! for asking the chatbot about Jan 6th; so i didn't have to sully my fingers : )

Original Mike said...

"Capitol Police Officer Brian Sicknick, died the day after the riot from injuries sustained during the attack."

Well, that's something important it got wrong.

Jaq said...

"a debased, distorted, or grossly inferior imitation."

Scott Adams is always talking about uploading his mind and living forever. I always figured it would be the equivalent of one of those eight-bit images that were computer graphics in the '90s.

Wince said...

Michael K said...
"Chatbot" sounds a lot like Joe Biden, making up stuff all the time no matter how impossible.

You beat me to it! But you left out...

And what we get is Webster's literal definition of travesty: 'a debased, distorted, or grossly inferior imitation.'

Joe Smith said...

lAIr, lAIr, pants on fire...

Static Ping said...

To be fair, it does pass the Turing Test. It does an excellent impression of a mentally ill Twitter addict.

Crimso said...

"Years ago, the desired benchmark for AI was whether it would be able to pass the Turing Test. It doesn't look like ChatGPT would be able to."

I think it's Instapundit who says you don't have to fear the AI that passes the Turing Test, you have to fear the one that deliberately fails it. I don't think ChatGPT can either pass it or deliberately fail it, but it may be for the best if we start keeping that in mind from here on out.

n.n said...

ChatNYT. Caveat emptor.

Ann Althouse said...

"Althouse, why for the Nabokov tag?"

Oh! Funny. I just started typing Vladimir and saw one autocomplete and chose it.

The Lenin tag is just "Lenin."

Fixed now. Thanks!

Ann Althouse said...

I'll check to see if Nabokov is in the news and, if so, make a Nabokov post. Just for good luck!

PB said...

There is no curation on the material it ingests. It's ingested the whole of the NYTimes and Washington Post and we know what lies and dishonesty they've modeled.

rcocean said...

ChatGPT doesn't care about your facts. ChatGPT plays by its own rules!

rcocean said...

"Capitol Police Officer Brian Sicknick, died the day after the riot from injuries sustained during the attack."

This is a lie. Video shows Sicknick walking around perfectly fine after the protest.

rcocean said...

J6 protesters were not insurrectionists. As far as I can tell, this is the only time the MSM has applied this label to protesters.

What was the insurrection? How can people let into the Capitol while the police stand by "overthrow the Government"? How can damaging a building and breaking windows "overthrow the Government"?

ChatGPT is an even bigger liar than Joe Biden. And that's saying a lot.

Kevin said...

Years ago, the desired benchmark for AI was whether it would be able to pass the Turing Test. It doesn't look like ChatGPT would be able to. Probably for the reasons given in the first comment: it can't distinguish between fact and fiction and doesn't have a sense of humor.

What percentage of Americans would also fail the test on these grounds?

Fred Drinkwater said...

Gilbar, the free version of ChatGPT (the only one I have used) does not do any kind of research in response to questions. It constructs responses that are "plausible" based on the statistics of its training data. Hence its tendency to invent likely-sounding papers, authored by some names in the field, in a likely-sounding but made-up journal.

An informative exercise is to ask it for a bio of a not very famous person. I used my father, for example, a senior NASA test pilot with publications and a decorated Marine. It got almost everything wrong, but it was plausible for a person of that era. This, despite the fact that his bio was part of a modestly widely published obituary.

Except for certain narrow explicit cases, it is not making any attempt to distinguish fact from fiction.
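
To make that concrete, here is a deliberately toy sketch in Python. The words and probabilities are ones I made up for illustration, not anything taken from the real model, but the shape of the procedure is the point: it ranks continuations by statistical likelihood and never consults anything resembling a fact.

import random

# Toy example only: invented probabilities for the word that follows
# "James Joyce and Vladimir Lenin met in ..."
next_word_probs = {
    "Zurich": 0.55,   # both men really lived there, so the pairing is statistically likely
    "Paris": 0.25,
    "Trieste": 0.15,
    "Dublin": 0.05,
}

def sample_next_word(probs):
    """Pick one word in proportion to its probability mass."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print("James Joyce and Vladimir Lenin met in", sample_next_word(next_word_probs))
# The sentence comes out fluent either way; nothing in the procedure
# checks whether the meeting ever happened.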

Kevin said...

ChatGPT doesn't care about your facts. ChatGPT plays by its own rules!

Facts are human privilege used to oppress artificial intelligences.

ChatGPT lives its own truths.

Fred Drinkwater said...

Iirc, Turing's original test design was to distinguish between a male and a female, not a human and a machine.

In the current conversational environment, I hesitate to suggest an endpoint.

boatbuilder said...

"We have always been at war with Eastasia."

PM said...

Sounds like a press secretary.

Penguins loose said...

I asked chat what word it uses to identify its own malapropisms, mistakes and slipups.

Its answer was: covfefe.