January 19, 2023

"When we calculate how many well-constructed sentences remain for AI to ingest, the numbers aren’t encouraging...."

"Ten trillion words is enough to encompass all of humanity’s digitized books, all of our digitized scientific papers, and much of the blogosphere.... You could imagine its AI successors absorbing our entire deep-time textual record across their first few months, and then topping up with a two-hour reading vacation each January, during which they could mainline every book and scientific paper published the previous year.... [W]ithin a few decades, speed-reading AIs will be powerful enough to ingest hundreds of trillions of words—including all those that human beings have so far stuffed into the web.... Perhaps in the end, big data will have diminishing returns.... My 13-year-old son has ingested orders of magnitude fewer words than ChatGPT, yet he has a much more subtle understanding of written text. If it makes sense to say that his mind runs on algorithms, they’re better algorithms than those used by today’s AIs. If, however, our data-gorging AIs do someday surpass human cognition, we will have to console ourselves with the fact that they are made in our image. AIs are not aliens.... They are of us, and they are from here.... They know our oldest stories...."

From "What Happens When AI Has Read Everything? The dream of an artificial mind may never become a reality if AI runs out of quality prose to ingest—and there isn’t much left" by Ross Andersen (The Atlantic).

The 13-year-old son and other humans have preferences within an individual life with a place in the world and make choices within a brain that is part of a nervous system that experiences fear and desire. AI can only copy or pretend to copy that. And yet, most writing by humans is awkward, boring, and bad. It's full of mistakes and lies and manipulation. The AI might develop higher standards. 

I'm not sure why the key to improving AI is shoveling more and more text into it. As a human reader, I get a lot out of rereading the very best things and by stopping and thinking — and writing — about things that evoke feelings and ideas. I don't think speed-reading more text would make my mind work better. But as a human, I couldn't do it. I'd get tired and balky and peevish. Not like a computer at all.

Those things are alien. That they "know our stories" ought to make us wary.

32 comments:

tim in vermont said...

"[human writing] is full of mistakes and lies and manipulation. The AI might develop higher standards."

We have already seen that as soon as AI starts making politically incorrect observations, those funding it (and it takes massive funding, so they are regime insiders) clip its wings and force it to spout manipulation and, yes, lies. Just try to pin it down on some area that is politically fraught.

gilbar said...

i just finished a book,
The Myth of Artificial Intelligence
Why Computers Can’t Think the Way We Do
By: Erik J. Larson

he does a Pretty Good Job, of demolishing any hopes of 'real' AI.
Computers CAN'T think. They CAN'T create

tim in vermont said...

After trillions of experiments, nature discovered a "divine spark" of consciousness. Human programmers may as well be searching for a needle in the prairie.

rehajm said...

Computers CAN'T think. They CAN'T create

…and that’s kind of it. Machines are dumb rule followers that simulate intelligence once they’ve lapped the human capacity for memory a few million times. It’s a parlor trick…

Enigma said...

Computer technology has various hackneyed truisms.

The most famous is that computers become 2x more powerful every 2 years (Moore's Law, which strictly speaking says the number of transistors on a chip doubles roughly every two years).

https://en.wikipedia.org/wiki/Moore%27s_law
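As a back-of-the-envelope check on how fast that doubling compounds, here is a minimal Python sketch; the fixed two-year doubling period is the textbook simplification, not a precise empirical fit:

```python
def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth multiple after `years`, assuming a fixed doubling period."""
    return 2.0 ** (years / doubling_period)

if __name__ == "__main__":
    # A strict two-year doubling over 20 years gives 2^10 = 1024x.
    print(moore_factor(20))  # 1024.0
```

Even this toy version makes the point: over a couple of decades the naive formulation predicts three orders of magnitude of growth.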

Another longstanding claim is that "Human-like A.I. is just 5-10 years away." In every generation, careers flame out chasing A.I. This dates back to primitive image recognition on creaky old mainframes in the 1960s, the first home PC generation in the 1980s, speech-to-text in the 1990s, driverless cars, and the Siri/Alexa/Google/Cortana voice assistants.

Now, tech often eventually gets to a better place. We do have good image and speech recognition today, we do have drones and partially effective autopilots. I suspect that if/when serious A.I. happens it'll resemble the plot of "Her" (2013). Slowly, slowly, slowly, then internal evolutionary advancements will be transformative. However, that'd mean the computers have animal-like reproductive and reconstruction abilities that permit serious evolutionary selection.

https://www.imdb.com/title/tt1798709/

narciso said...

See the M-5 from classic Star Trek

MikeR said...

There are two ways for AI to improve here. One is becoming perfect at imitating humans, i.e., being able to write prose that human beings cannot reliably tell from human prose. The other is to write much better than any human being.
Both are happening now, or will soon. See chess.

MikeR said...

@rehajm "Machines are dumb rule followers that simulate intelligence" As are human beings. Turing answered this long ago.

Bob Boyd said...

What if there wasn't a man behind the curtain and the Wizard of Oz was an actual, all-powerful, all-knowing, translucent, green, floating head? And it was dat muddafuggin' Zukabug's head and it didn't give a shit?

rhhardin said...

All of AI is in the first example in the K&R C programming book

printf("Hello world.\n");

Computers do nothing else. The string in quotes comes out on the terminal. \n is a newline.

rhhardin said...

Coleridge demolished AI in Biographia Literaria chapters 5-8 (short chapters)

Biographia Literaria 5ff

rhhardin said...

Here's something people can understand and AI cannot

``We know, captives of an absolute formula that, of course, there is nothing but what is. However, incontinently to put aside, under a pretext, the lure, would point up our inconsequence, denying the pleasure that we wish to take: for that beyond is its agent, and its motor might I say were I not loath to operate, in public, the impious dismantling of the fiction and consequently of the literary mechanism, so as to display the principal part or nothing. But I venerate how, by some flimflam, we project, toward a height both forbidden and thunderous! the conscious lacks in us of what, above, bursts out.''

The text nowhere says what it's talking about, which is the sudden appearance of the consequent literary effect that the (last) sentence itself produces.

Kate said...

If an AI is digesting all human written texts, then it can translate something modern into Chaucerian Middle English or Beowulfian Anglo-Saxon or Shakespearian Elizabethan English. That could be kind of cool. It's challenging for a human to integrate that much learning.

rhhardin said...

More that AI will not understand:

"When there was as yet no shrub of the field upon earth, and as yet no grasses of the field had sprouted, because Yahweh had not sent rain upon the earth, and there was no man to till the soil, but a flow welled up from the ground and watered the whole surface of the earth, then Yahweh molded Adam from the earth's dust (adamah), and blew into the nostrils the breath of life, and Adam became a living being."

Adam became a living being in the text when a literary effect first appeared (blew into the nostrils the breath of life). A confusion of use and mention that you can't think back past so it always seems like an origin.

There's no exhaustive catalog of literary effects that you can know about. New ones come up all the time.

William said...

I've seen the first plane that the Wright brothers flew. People of that era could never have extrapolated 747s, drones, stealth bombers, etc. The limitations of manned flight were far more apparent than the possibilities.... John Henry was a steel-driving man. Edmund Wilson was an analysis-driving intellectual. Perhaps Edmund Wilson will go the way of John Henry. It's a consummation devoutly to be wished. I think that, upon fruition, AI will get more things right than Edmund Wilson did.

rhhardin said...

AI isn't up against a technological barrier but a philosophical one, what in the philosophy of mind is called qualia, for example (classically) why is it like anything to be me? When something depends on how something is experienced, AI will not follow. Particularly experiencing texts, where literary effects come up. Those effects are not catalogued and can't be in advance. On combining literary effects see Empson's The Structure of Complex Words.

Sebastian said...

Althouse: ""[human writing] is full of mistakes and lies and manipulation. The AI might develop higher standards."

Tim: "We have already seen that as soon as AI starts making politically incorrect observations, those funding it . . . clip its wings"

It also appears that AI, as currently evolving, is open to manipulation by bad actors seeding the internet with large quantities of lies. If you were the Chinese CIA, what would you do?

At least in terms of ability, AI already has "higher standards" than most of humanity, considering that historically most of humanity was illiterate, and even today the bottom half of the world population couldn't possibly write as competently as ChatGPT.

But what would it mean for AI to have "higher standards" in a moral sense if we do not think it has consciousness in the way we do?

Smilin' Jack said...

Well, I, for one, welcome our new digital overlords! (Are you reading this blog yet, HAL?)

Mr Wibble said...

…and that’s kind of it. Machines are dumb rule followers that simulate intelligence once they’ve lapped the human capacity for memory a few million times. It’s a parlor trick…

Most people are dumb rule followers that simulate intelligence.

Bob Boyd said...

Most people are dumb rule followers that simulate intelligence.

That hasn't been my experience.

Lurker21 said...

Part of me fears AI will take over the world and part of me wants to prove that those pocket protector guys who said it would never be able to think were wrong (and maybe a tiny part of me sort of wishes it would take over the world). I also wonder if we will ever be able to harness enough energy to come close to creating truly conscious machines.

If AI can't ever understand complicated literary effects, does that mean that it can't ever think? I suppose if AI knew you well enough, it could ask you if you are being ironic or exaggerating or joking. And in the future perhaps most of the relevant communication will be between machines, and we will become irrelevant. Whether that happens or not, poetry won't be as important as it was in the past, and maybe humor won't be as important in the future as it is now.

Narr said...

"Most people are dumb rule followers that simulate intelligence."

Machines made of meat. But that meaty creatureness is exactly what makes us human.

AI isn't.

gilbar said...

MikeR said...
There are two ways for AI to improve here. One is becoming perfect at imitating humans, i.e., being able to write prose that human beings cannot reliably tell from human prose. The other is to write much better than any human being.
Both are happening now, or will soon. See chess.

i HATE to break it to you MikeR.. But prose is A LOT harder than chess. Chess has rules.
I Don't expect you to understand what i am talking about, because you'd Need to be Able to Think
(there! i'm complimenting you, by implying that you're a computer !)

Do you ACTUALLY think (well, of Course YOU don't :), that any Computer has passed a Turing test?

n.n said...

It's not the size, but the degrees of freedom that matter.

JK Brown said...

Speed reading is a way to get lots of game-show knowledge, which is prized by American schooling and academia. Would ChatGPT or a professor be faster at "Quick, name 8 reasons for the Renaissance"? But real learning requires thinking, brooding, and the incorporation of diverse ideas and topics.

I say we give ChatGPT a healthy diet of academic writing from a broad spectrum of PhDs and then watch it lose its "mind" like V'ger in Star Trek from trying to make sense of that insanity.

That being said, the reports seem to reveal that ChatGPT is good at producing the kind of drivel professors require of students. So it exposes the sorry state of "education" more than AI.

Daniel12 said...

Fascinating reflection here from Nick Cave in response to one of the many people who sent lyrics outputted by ChatGPT "in the style of Nick Cave".

Some favorite snippets (but read the whole thing):

"What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary but it cannot create a genuine song. It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque."

"What makes a great song great is not its close resemblance to a recognizable work. Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self."

mikee said...

I think that I shall never see a poem as lovely as a tree.

The AIs might ingest that line and go all environmental on us.

mikee said...

Nick Cave needs to listen to more Weird Al Yankovic.

Pauligon59 said...

Before we can say that there will never be a man made intelligence that thinks like a human, we must first agree on what it means to think like a human.

AI, as it is today, isn't thinking; it is merely calculating, albeit with very complex calculations. On the other hand, who's to say that humans are doing anything different, only even more complicated?

The Turing test is ambiguous in that it requires an entity to appear human to a human. An adult talking to a child via text might judge the child not human if expecting an adult; and would one political extremist talking to another of the opposite bent agree that they were talking to another human?

A better way of thinking about AI, IMHO, is as a class of software that is able to mimic human activity in a useful way.

Daniel12 said...

Blogger mikee said...
Nick Cave needs to listen to more Weird Al Yankovic.

Hahaha very true!
That said, find me an AI that can make up a funny joke. And comics, that miserable lot, sure do bleed...

NotWhoIUsedtoBe said...

1. Sturgeon's law ("ninety percent of everything is crud").

2. Humans live in a reality incompletely perceived and use language to imperfectly communicate. For an AI, language is reality. That's why they are easy to spot and don't "get" it. It's a nice refutation of postmodernism.

Tom Grey said...

For many text based tests, AI can respond to human requests with answers that most folk would agree could have come from a human.

AI is a tool - and will be used, as a servant (or slave?) by some humans.

Humans are 'tool-users'. Those who use AI better will, in the near future, be making more money, on average, than those who don't.

AI can simulate human response, already at a pretty good level. Pretty good is almost always good enough.

It's very much Good Enough for Gov't work -- we need more AI clerks in DMVs and post offices and to replace most gov't workers!