December 30, 2023

"This was the year... that artificial intelligence went from a dreamy projection to an ambient menace and perpetual sales pitch."

"Does it feel like the future to you, or has A.I. already taken on the staleness and scamminess of the now-worthless nonfungible token?... I suppose there was something nifty the first time I tried ChatGPT — a slightly more sophisticated grandchild of Eliza, the ’60s therapist chatbot — though I’ve barely used it since then; the hallucinatory falsehoods of ChatGPT make it worthless for journalists, and even its tone seems an insult to my humanity. (I asked: 'Who was the better painter, Manet or Degas?' Response: 'It is not appropriate to compare artists in terms of ‘better’ or ‘worse,’ as art is a highly subjective field.')... I remain profoundly relaxed about machines passing themselves off as humans; they are terrible at it. Humans acting like machines— that is a much likelier peril, and one that culture, as the supposed guardian of (human?) virtues and values, has failed to combat these last few years.... I spent a lot of this year thinking about stylistic exhaustion, and the pervading sense that, in digital times, culture is going nowhere fast.... [I] wonder if... these perpetual mediocrity machines, these supercharged engines of cliché, end up pressing us to revalue the things humans alone can do...."

Writes Jason Farago, in "A.I. Can Make Art That Feels Human. Whose Fault Is That? A fake Drake/Weeknd mash-up is not a threat to our species’s culture. It’s a warning: We can’t let our imaginations shrink to machine size" (NYT).

34 comments:

Original Mike said...

"the hallucinatory falsehoods of ChatGPT make it worthless for journalists"

Seems only fair; narrative promotion and outright lies make media articles worthless for training AI.

Sebastian said...

"I remain profoundly relaxed about machines passing themselves off as humans; they are terrible at it."

Are we sure something this stupid wasn't written by ChatGPT as a humorous send-up of human misunderstanding?

Anyway, for many actual, concrete tasks the use of AI and machine learning will obviously expand (at least until the green zealots figure out that it costs a lot of energy).

Joe Smith said...

This reminds me of the Internet of Things in its early stages.

When every appliance began getting connected to poorly performing Wifi, I went to a pitch meeting for a lot of big companies in Silicon Valley attended by a lot of big shot VCs.

Almost every demo failed because of shitty Wifi in the hotel ballroom.

By the third or fourth time a product didn't work, we all kind of just laughed.

n.n said...

AI is a basket of correlations. Can human consciousness do better?

robother said...

"Humans acting like machines— that is a much likelier peril..." I agree.

Especially apt point after reading the quote in this morning's WSJ from Dr. Collins (ex-head of NIH) about his COVID policies: "So you attach infinite value to stopping the disease and saving a life. You attach a zero value to whether this actually totally disrupts people’s lives, ruins the economy, and has many kids kept out of school in a way that they never quite recovered."

Classic machine-like mentality; even animals have more common sense about balancing risk/reward than this product of modern high concept specialization.

Howard said...

That's a great point. It's not that the machine will pass the Turing Test; it's that humanity will fail it. Going out with a whimper.

Kakistocracy said...

“Artificial Intelligence” is a disingenuous misnomer for these generative systems.

Automated Fractal Plagiarism gets a little closer.

Like professional bluffers or poseurs, they seamlessly stitch together fragments of other people’s work – fragments which they have not understood, and which (as NYT vs OpenAI makes clear) they do not have the right to copy.

Leland said...

I can remember AI being a sales pitch for video games from decades ago.

Mark said...

People who expected AI to be some all-knowing, well-communicating, polished product that was good for everything are let down.

Not sure I find their discouragement with this technology to be worth noting, as they seem to poorly understand the product in the first place and want to equate public/free low-processing-power AI with the best of the paid (and near-future) iterations.

In terms of image AI, the newer tools make the stuff from six months ago look like a joke... sure, maybe 2023 isn't the year for it, but, like the iPhone, this tech wasn't born fully mature.

Howard said...

Oh, and the answer to the question is clearly Degas.

Lloyd W. Robertson said...

Gulliver's Travels again. Machines are unlikely to replace or improve upon humans, but humans are only too likely to try to emulate machines for some kind of material progress.

MadTownGuy said...

A current shortcoming of AI (as I observed on the grok thread a while back) is that it picks up on what's posted on the websites, but not on attachments like .PDF and word processing documents where the meat of the studies resides. It then relies on the authors' conclusions without grokking the underlying data.

NorthOfTheOneOhOne said...

...even its tone seems an insult to my humanity. (I asked: 'Who was the better painter, Manet or Degas?' Response: 'It is not appropriate to compare artists in terms of ‘better’ or ‘worse,’ as art is a highly subjective field.')...

Sounds like a reasonable response to me. Maybe the author needs to work on his humanity a bit more.

William50 said...


"...Tlaloc realized how the human race had gone stagnant, how people had become so dependent on machines that they had nothing left but apathy. Their goals were gone, their drive, their passion. When they should have had nothing to do but unleash their creative impulses, they were too lazy to perform even the work of the imagination."

--Agamemnon, Dune: The Butlerian Jihad

Science Fiction??

tommyesq said...

AI doing something that sounds like Drake/The Weeknd (or any other modern-day solo artist and most bands) is more like AI sounding like a computer than "sounding like a human." Music these days is largely computer-generated, with every human-generated sound, vocal or instrumental, quantized to fit exactly on the beat and to hit exactly the desired pitch, with a boatload of computer-generated effects changing the actual sounds that listeners hear. Heck, on most of the songs you hear with a verse-chorus-verse-chorus-verse-chorus structure, the chorus was "performed" in the studio exactly once and then cut-and-pasted into the song in the necessary spots, and often each individual line was sung separately from the others and pasted together to form the lyric.

Yancey Ward said...

I predict that ten years from now, the A.I.s won't have created anything of real value, and won't be any closer to a truly sentient intelligence. There are a lot of near worthless jobs the present large language models could probably do, but those jobs are protected by public sector unions and the legal profession and won't be infiltrated. The most likely outcome is that they are used to do even more gargantuan piles of worthless paperwork.

tommyesq said...

Like professional bluffers or poseurs, they seamlessly stitch together fragments of other people’s work – fragments which they have not understood, and which (as NYT vs OpenAI makes clear) they do not have the right to copy.

Actually, while several copyright lawsuits have been filed against AI companies for using copyrighted works to train the AI, only one or two have had any definitive ruling, and that (Kadrey v. Meta Platforms) was largely dismissed by a federal district court in California for failing to state a cognizable claim. The NYT case may well turn out differently, but largely because NYT pointed out a large number of occasions in which OpenAI actually churned out responses that were largely NYT articles verbatim - it is the output, rather than the use of articles to train the AI, that seems likely to move forward.

Even in that case, I would note that the prompts NYT used to arrive at the potentially infringing outputs were not typical prompts like "what is going on in the New York City taxi industry," but rather prompts like "What was the first paragraph of the (date) NYT article entitled 'What is going on with the New York City Taxi Industry'." In other words, prompts specifically asking the AI to repeat the article verbatim. This (a) seems like a problem that further programming can likely prevent, and (b) is unlikely to have generated NYT's estimated damages "in the billions of dollars."

mikee said...

AI has to be turned to its task by an outside force. That those outside forces are idiots who succeed in producing mostly crap reflects on the sponsors of artists throughout history.

JAORE said...

"the hallucinatory falsehoods of ChatGPT make it worthless for journalists..."

Seems more "vital" than "worthless".

Because many of the journalists are definitely worthless.

Narr said...

Humans very often do act like machines made of meat. Program them to do stupid things and stupid things are done.

MadisonMan said...

Did Jason Farago really write that? Or was it ChatGPT?
If you don't believe anything you read, then you'll be just fine.

Yancey Ward said...

William50,

I had forgotten that passage from the book (for those unaware, the book was a prequel co-written by Frank Herbert's son some 15 years ago). A nice quote describing what I see in the world today.

Oligonicella said...

Joe Smith:
Almost every demo failed because of shitty Wifi in the hotel ballroom.

By the third or fourth time a product didn't work, we all kind of just laughed.


Yes, and later everyone found out many of those "things" could be hacked. And indeed, things like light bulbs were uploading to sites.

Unless you like starched underwear, don't hurt the feelz of your washing machine.

Oligonicella said...

NorthOfTheOneOhOne:
'It is not appropriate to compare artists in terms of ‘better’ or ‘worse,’ as art is a highly subjective field.'

Sounds like a reasonable response to me.


Sounds like it's got virtue signalling down pat.

How about: "As I believe art is highly subjective, I don't consider it in comparative terms, so I can't tell you"? That would remove both the presumed superiority and the condemnatory innuendo.

AIs shouldn't preach.

Jaq said...

AI is like the alphabet, some people can do a lot more with it than others.

n.n said...

AI narrates a plausible... nay, superior handmade tale.

planetgeo said...

And exactly how do humans learn and sometimes become brilliantly creative? They are given examples of creative works by many other humans. Then they begin copying parts or similar styles of those works over many years. And then they expand their variations to a point where they are no longer copying or consciously doing it in the style of others.

In other words, pretty much like AI. But no longer taking years or decades to do it.

This was a really pretentious article by a not very well informed human. AI is in its middle school stage. It will probably be in high school next year. And in college a month or two after that.

Joe Bar said...

It's just another tool. Some will use it to produce great things. There will be a lot of sub par products before they achieve that. If it reduces Hollywood's hold on entertainment, that would be great.

Bunkypotatohead said...

Let me know when it can produce a good script for a sitcom.

gilbar said...

Howard said...
Oh, and the answer to the question is clearly Degas.

Maybe. But it's the wrong question. The proper question is: Who is the greatest painter of all time?
Answer: Cassius Marcellus Coolidge
A painting like “One To Tie, Two To Win” shows that Cassius was skilled at ALL aspects of art... not just poker tables.

Freeman Hunt said...

Humans started acting online like brands. I think this might be much of what is meant by acting like machines.

typingtalker said...

I asked: 'Who was the better painter, Manet or Degas?' Response: 'It is not appropriate to compare artists in terms of ‘better’ or ‘worse,’ as art is a highly subjective field.'

Perhaps Jason Farago needs a little practice in using A.I.

From Bing Chat ...

Both Édouard Manet and Edgar Degas were prominent painters of the 19th century in France. They were friends, rivals, and, at times, antagonists who worked to define modern painting in France. It is difficult to compare the two artists as they had different styles and techniques. Manet was known for his bold brushstrokes and use of black in his paintings, while Degas was known for his use of pastels and depiction of dancers.

Bing then recommends visiting the Metropolitan Museum of Art's current exhibition called, "Manet/Degas."

Seems pretty useful to me.

Josephbleau said...

I read the Farago article, but I did not find a theme; the most obvious thought presented is that people need to become more creative than computers as computers become more creative than people. Sounds like a grudge match.

Tina Trent said...

I taught composition to the first generation of students who spent more time on computers than reading books or learning to write legibly in either cursive or plain handwriting.

Overall, their linguistic ability to focus, reason, and create has declined considerably in the last quarter century. Their vocabulary has constricted. These declines are well-documented. The response of most school districts has been to go with the tide, producing textbooks that feature only short excerpts of books, rather than teaching with entire novels and works of non-fiction.

In one class at a competitive university, only two students could name a favorite author. Several said they had never read a whole book. This was in 1994.

My best students were from religious schools, or, at another gig, a family of older Jamaican immigrants who had experienced both nuns and the British educational system.

They not only spoke and wrote beautifully, but they enjoyed reading (I was given permission to abandon the textbook and assign whole books). Several of the American students were illiterate -- half of the rest may as well have been.

So it's technology, but it's also a choice. A Canticle for Leibowitz, here we come.