December 12, 2023

"Dictionary.com has chosen 'hallucinate' as its 2023 Word of the Year, but not in its traditional, trippy sense."

"Instead, Dictionary.com is highlighting the word’s increased usage among users and critics of artificial intelligence (AI) programs, who have adopted the term to describe the inaccurate and often outlandish outputs that chatbots and other prompt-based AI programs attempt to present as fact...."

16 comments:

Fred Drinkwater said...

"Hallucinate" is a poor term for that AI behavior. It implies, incorrectly, that there is some "normal" "mental" process, which is derailed, producing these hallucinated responses.

None of that is happening.

The use of "hallucinate" is marketing.

n.n said...

It's the burden of human intelligence to discern the quality of correlations presented by both physical and artificial models.

robother said...

I don't converse with AI so I'm unfamiliar with this usage. Is it an invocation of HAL, the 2001 onboard computer?

stutefish said...

I had the privilege of attending a talk by the DOD's head of technology and AI, and "hallucinate" was exactly the term he used to indict the current crop of AIs that the DOD is considering.

tim in vermont said...

I think that there are definite analogues between how AI "thinks" and how humans "think" about certain things. Some of the things that AI does count as part of thinking. I kind of agree that when it can do math without simply regurgitating, it will be far closer to what humans do.

"Anyone who cannot cope with mathematics is not fully human. At best he is a tolerable subhuman who has learned to wear shoes, bathe, and not make messes in the house." - Robert Heinlein, Starship Troopers

tim in vermont said...

"None of that is happening."

I am no longer sure of that. We might not use the same processes exactly to choose our language, but we use something analogous, I am convinced. Neurons are not, at the end of the day, magical.

Michael said...

My word of the year: kakistocracy.

Tom T. said...

"Hallucinate" is a good analogy. The AI is presenting as fact something that does not exist or did not happen. Think of the lawyer who asked for case law on a particular topic - there wasn't any, so the AI wrote some.

Fred Drinkwater said...

tim,
LLMs relate language fragments to other language fragments using statistical relationships among syntactic elements. They have no process to attach, or relate, semantic elements to those language fragments. And, AFAIK, the statistical relationships are not weighted according to any sort of correctness metric. (Correctness, as in, correspondence to the real world.) That could be done, but would require a huge amount of human judgement and input. We could define a new profession to do that; call them, say, "teachers".
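
A minimal sketch of that purely statistical idea, using a toy bigram model in Python (an illustration of the general mechanism, not any real LLM's internals; the corpus and names here are made up). It strings tokens together by observed frequency alone; nothing in it represents meaning or checks the output against the world:

    # Toy bigram "language model": next-token choice is purely statistical.
    # No semantics, and no correctness metric, appears anywhere in the process.
    import random
    from collections import Counter, defaultdict

    corpus = (
        "the court found the claim valid . "
        "the court found the claim frivolous . "
        "the model found the answer ."
    ).split()

    # Count how often each token follows each other token.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start, length=8):
        """Sample tokens from the observed follow-frequencies."""
        out = [start]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            tokens, counts = zip(*candidates.items())
            out.append(random.choices(tokens, weights=counts)[0])
        return " ".join(out)

    print(generate("the"))
    # Produces fluent-looking output, e.g. "the court found the answer . the ..."
    # -- a plausible token sequence, with no regard for whether it is true.

A real LLM swaps the frequency table for a learned network over much longer contexts, but the training objective is the same flavor: predict a plausible next token, not a true one.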

To quote the popular philosopher Qui-Gon Jinn, "The ability to speak does not make you intelligent."

Lem the artificially intelligent said...

I just want to have an authentic hallucination for once.

Narr said...

I'm old enough to remember when GIGO was a cutting-edge bit of jargon.

Now, IMO, many AI enthusiasts (not them alone, but primarily) have forgotten that, and decided that what is needed is more G. AI can pile higher and deeper than many university faculties combined.

mikee said...

My next bumper sticker: "Hallucinate cromulently, until you grok."

tim in vermont said...

"The ability to speak does not make you intelligent."

True, but it's rather difficult to be intelligent without it. It's a step in the process. A lot of the comments and posts you read on-line consist of what Scott Adams calls "word thinking." Stringing words together into things that sound OK, if you don't think about them. I agree with Scott that it's not exactly "thinking," but it's in the neighborhood.

tim in vermont said...

"My word of the year. kakistocracy"

You're not *that* Michael, are you?

JRoj said...

"Persistent hallucinations" is the term given so the phenomenon is more easily grasped. Simple as that. The truth is, what goes on inside the black box of AI isn't fully understood, and won't be. If you don't close the loop with human analysis after the AI portion of the work is done, the humans are to blame for letting AI hallucinations pass as reality or truth.

mikee said...

I once had a hallucination due to sleep deprivation. In discussing this experience with other people over the years, I've learned that my hallucination is a very common one among the sleep-deprived. Lotsa people see someone who is not there, after a few days without hitting the pillows.

So if my hallucination is the same as many other hallucinations, is that not evidence that hallucinations are real, and understandable, and have some simple cause/effect relationships with body chemistry and behavior?

Hallucinations are not imaginary; they are real.