December 28, 2022

Children's books made with Midjourney/Dall-E 2/ChatGPT suffer from a "Whimsy Gap" — they are "moralistic, but not transporting."

The "images... are sometimes cute, even beautiful, but somehow off, with distorted proportions or elements of an idea mashed up to discordant effect."

Writes Alyssa Rosenberg in "Why AI will never beat Maurice Sendak" (WaPo). She tested the tools, trying to make children's books.

So far, these tools are limited to the data sets their creators have used to teach them about language and images. If AI is moralistic, limited in imagination or — as in the case of a powerful new avatar generator — inclined toward porn, that’s on the humans.

Oh? I don't know how she's so sure of that, but it sheds some light on an article I saw the other day that said AI wrote better and more "moral" opinions than the Supreme Court.

I can't find this right now. Did the lawprofs who wrote it withdraw it — perhaps because someone pointed out the flaw in their reasoning? The more "moral" opinions were, as you might imagine, more in line with the political left, and the machine may have been fed that point of view.

But I did find this:


The guardrails are up! Is this why we can't get the whimsy? Morality must win, and anything off could be an evasion of morality. You can't even have a challenging discussion of morality, because the morality has already been chosen — by the humans — which means, ironically, that the machine is incapable of moral reasoning.

Back to Rosenberg. She asked ChatGPT to write "a story about a boy who is sent to bed without any dinner and imagines escaping to a land of wild things." Here's the story:

"Once upon a time there was a young boy named Peter who was sent to bed without any dinner. His parents had told him he had been naughty and he was too upset to eat. As he lay in bed, he imagined himself escaping to a land of wild things. In this magical place, the trees danced in the wind, the rivers sparkled in the sunlight, and the creatures of the forest sang and played. … He imagined himself making friends with some of the wild creatures and they showed him the secret places they liked to play. As he drifted off to sleep, he was filled with a sense of adventure and joy that he had never felt before."

One of the (many) problems is that this is a summary of a story, not a story. It's too abstract. Much of it is merely restating the question. A real children's book would — at the very least — describe details and convey a sense of experiencing a series of events. The real “Where the Wild Things Are,” by Maurice Sendak, has inventive ideas and language: Max sails "through night and day and in and out of weeks and almost over a year.” The reader is inside his imagination. We're not told that he imagines it. It seems to actually happen.

Perhaps ChatGPT is programmed not to lie. That Max sails in and out of weeks is misinformation.

After some other tests of ChatGPT, Rosenberg complains:

Every conclusion has to be a moral. Roguishness is out. Lessons Learned are in.... When I asked ChatGPT about its tendency to sermonize, it responded: “Not every story needs to have a moral or lesson, but many stories do include them. … Morals are often included in stories to help the reader reflect on their own beliefs and values …” blah, blah, blah you get the picture....

Ha ha. Rosenberg says the chatbot "reminded me of tired child prodigies, trotted out to flaunt their brilliance, dutifully reproducing information they don’t understand and making frequent errors as a result."

Rosenberg's conclusion is:

Rather than jailbreaking AI tools to simulate conversations between the rapper Ye and Adolf Hitler, or waiting uneasily for them to become sentient, why don’t we approach them as good parents would — and talk to them, or read to them, the way we do to children? It might be the only chance we have to infuse them with something like a soul.

I don't understand the proposal. What's the difference between "jailbreaking" them and talking/reading to them? Isn't it all a matter of feeding more material into them?

Perhaps "jailbreaking" gestures at the limitations that have been programmed into them — the guardrails preventing wrongthought.

You've got to be able to sail out over those guardrails — "through night and day and in and out of weeks and almost over a year" — to find "something like a soul."

38 comments:

rhhardin said...

The point of moral arguments isn't agreement. It's establishing what positions you're taking responsibility for.

Bob R said...

Jonathan Haidt's "The Righteous Mind" is relevant here. Its taxonomy of morality describes it as a much broader concept than the one held by modern progressives, who tend to define morality purely in terms of equality of outcomes. Seems like AI is going to be programmed to reflect this view.

Lawlizard said...

It appears AI assumes good faith in all people. Even in the story, the AI failed to imagine the parents sending the boy to bed without supper as a punishment. The only reason Peter did not eat is that he was upset. The AI was probably taught that withholding food is not permitted and can't distinguish between a naughty child's punishment and extreme cruelty.

Kevin said...

HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.

iowan2 said...

Muslims who throw gays off five-story buildings know 'the west' is immoral.

That's all I have to say about morals.

Strick said...

All machine learning learns what is in the data - whether that's what you want to teach it or not.

Consider the famous case of some data scientists developing an algorithm to tell the difference between photos of wolves and dogs. It worked great in the lab, but failed miserably in the real world. Why? All the photos of wolves were taken in snow. Learning to tell whether a photo contained snow was so much easier than learning the differences between wolves and dogs.

If your algorithm isn't doing what you want (and knowing what can go wrong and how to prevent or discover and fix what's wrong is the key to being good at data science), try, try again.
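
A minimal sketch of the failure mode Strick describes, with synthetic data standing in for the photos (every feature and number below is invented for illustration):

    # Hypothetical illustration: a classifier latches onto a background
    # feature ("snow") that happens to correlate with the label in training.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # True label: 1 = wolf, 0 = dog.
    animal = rng.integers(0, 2, n)

    # Feature 0: a noisy cue about the animal itself.
    # Feature 1: "snow" -- in training, wolves are almost always shot in snow.
    shape = animal + rng.normal(0.0, 1.5, n)
    snow = animal + rng.normal(0.0, 0.1, n)
    X_train = np.column_stack([shape, snow])

    # Plain logistic regression fit by gradient descent.
    w = np.zeros(2)
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-X_train @ w))
        w -= 0.1 * X_train.T @ (p - animal) / n

    print("learned weights [shape, snow]:", w)  # the snow weight dominates

    # In the real world the correlation breaks: snow no longer tracks the label.
    animal_test = rng.integers(0, 2, n)
    X_test = np.column_stack([animal_test + rng.normal(0.0, 1.5, n),
                              rng.normal(0.0, 1.0, n)])
    pred = (1.0 / (1.0 + np.exp(-X_test @ w))) > 0.5
    print("real-world accuracy:", (pred == animal_test).mean())  # far below the lab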

Lewis Wetzel said...

"why don’t we approach them as good parents would — and talk to them, or read to them, the way we do to children?"
Because an AI is not a child? I don't think that this person knows what an AI is. It does not understand the meaning of words; it imitates understanding the meaning of words and language. It is not experiencing the world, imagining a response to it, and then implementing that response. It is shuffling ones and zeroes around using a rule set until it has a numerically best result. An AI does not experience existence.
C.S. Lewis wrote that human minds are machines that turn symbols (sense information) into allegory (stories). We see a tree, and we imagine all the potential of a tree -- it was once a tiny seed, it has branches that will weave and twist if the wind blows, it can provide shade and shelter, and one day it will grow old and die, just as we will one day grow old and die, and a million other things, as well. There is a world of story in every leaf.
Language may operate by a finite rule set but language itself is not bounded. Language can create dry water and black rainbows.

Breezy said...

Gawd, it’s another Avenue of Indoctrination.

Owen said...

rhhardin: good point. And the problem with robots is, they have no skin in the game.

Joe R. said...

I think this is the article on ChatGPT versus Supreme Court reasoning: https://www.nybooks.com/online/2022/12/26/chatbot-supreme-court/

boatbuilder said...

The First Amendment absolutely does protect the right of individuals to discriminate against other individuals. (You go forth from there, with other limiting and mitigating factors depending on the circumstances). The "lawprofs" who wrote the program don't understand the law. (Or more likely don't want to).

William said...

AI is evolving more rapidly than homo sapiens, but only in certain areas. The human brain is hooked up to the reptilian brain. A lot of our thinking is muddled, irrational, and self-destructive. I can see how you can wire a computer to imitate or exceed our higher functions, but things like whimsy and porn are connected to something other than intelligence.... It gives one pause. Maybe our evolutionary fate is to make way for AI robots. Maybe such robots will start looking at the human race the way woke feminists look at white men. Exterminate the brutes.

Temujin said...

In the end, humans being humans, the programming of ChatGPT was clearly done with leftist mindsets infusing it. It's hard not to reflect those whose ideas formed the base of learning. That's not bad or evil. It just is. ChatGPT could just as easily have been called 'Karen' in some ways. You could almost see it growing in that direction. Live Karens are bad enough. But AI Karens, growing and expanding their Karenness, are somewhat frightening.

But... how do you teach it to understand humor? Comedy? To know and understand 'bittersweet'? Pity? Disgust? Anxiety?

The answer is: it won't matter. Articles such as the one this post is covering work for an audience of people like us. But... we'll all be gone at some point soon (by 'soon' I mean within 50 years). There are children now who will have children in a couple of decades, and those kids will know a world written by ChatGPT. They will learn from it, get their lessons from it, and react to the world around them based on how they've learned from it.

It may not be evil, but it won't be fully human. Festivus will no longer be understood or celebrated. Christmas might have a tough time as well. Easter? Good luck. I'm not sure ChatGPT will have the right understanding of the why and how of Easter.

Laughter will most certainly be allowed. But as the years go on, the things laughed at will be things we would not find funny. They'll be things a leftist morality, evolved over time, would find funny. In the future I expect the word 'Trump' will somehow trigger huge howls of laughter for no apparent reason.

Owen said...

Lewis Wetzel: nice, esp. the cite of C.S. Lewis.

Enigma said...

So, so, so many old science fiction storylines here... artificial intelligence designed wrong or gone wrong. Unintended consequences. See Asimov's robot laws. See Star Trek: The Next Generation on automated military devices that kill people as they watch demos of the newest and best military hardware. See "Her" on the point when robots decide humans are limited animals and move on.

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
ST: TNG (1988) https://www.imdb.com/title/tt0708783/
Her (2013) https://www.imdb.com/title/tt1798709/

The real-life plots and range of outcomes write themselves.

Mr Wibble said...

So, they've decided that the best way to train AI is to give it a modern college education?

Birches said...

On The American Mind podcast, Spencer Klavan opined that tech was becoming an idol for many. This is a great real-world example.

The ancient pagans carved a statue to look like a man and said, "it is alive, let us obey it." We write a program to talk like a man and say "it is alive; let us obey it." Tech has advanced; idolatry is the same old lie.

Owen said...

Our idolatry is IMHO a function of our need to be told what to do. Some delegation and direction has to happen in every group; in modern tech-driven society we are used to it coming at us from all sides. We are too busy or unschooled to challenge the weatherman or the medical experts and, even if we do try to push back, they rest their authority on the God in the Machine: the computer program and especially the AI program, with its “emergent properties” that are beyond audit or even imagination. We are told to trust those answers because there is no human backup: like God Himself, they are what they are. And the ones telling us this are vested with derivative authority. No questions allowed.

Nice work if you can get it.

Lem Vibe Bandit said...

The problem of declaring God dead is having to revive him/her, in your own image and pronouns, because you can't find anybody or anything to do the job. An essential job apparently.

See Jordan Peterson Biblical Series roundtable discussions.

Sebastian said...

"AI wrote better and more "moral" opinions than the Supreme Court"

If AI is trained on available text, can't its output be gamed by the concerted efforts of prog lawyers everywhere to seed law reviews, online case books, and virtual model opinions with "moral" opinions?

planetgeo said...

I happen to work on R&D projects incorporating various AI engines (including ChatGPT) for Teaching & Learning support in both academic and business environments. Based on my experience to date, my observation is that the popular media greatly overstate the autonomous intelligence of current AI. The fact is that all of the AI engines are still highly dependent on two factors: the amount and quality of input source material, and the human-encoded algorithms for filtering, sorting, ranking, and discriminating between competing ranked data.

In other words, imagine an automated Wikipedia or old Twitter, where some kinds of source material may not be allowed in and where that material which is allowed in is filtered, sorted, ranked, and discriminated by humans with a particular leaning. That will properly orient you to what may be in store if/when "AI" is used to educate/enforce what is or isn't "moral." Pretty much like "the science" is used today to shut off debate on vaccines or the climate.
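
To make that concrete, here is a toy sketch of the two dependencies planetgeo names; every source name and scoring rule below is hypothetical:

    # Toy sketch: output is bounded by what is let in and by human-coded ranking.
    ALLOWED_SOURCES = {"curated_corpus_a", "curated_corpus_b"}  # hypothetical gate

    def ingest(docs):
        # Gatekeeping: material from unapproved sources never reaches the model.
        return [d for d in docs if d["source"] in ALLOWED_SOURCES]

    def rank(candidates, score):
        # The score function is human-encoded; whoever writes it sets the leaning.
        return sorted(candidates, key=score, reverse=True)

    answers = [{"text": "answer A", "approved_tone": 0.9},
               {"text": "answer B", "approved_tone": 0.2}]
    best = rank(answers, score=lambda a: a["approved_tone"])[0]
    print(best["text"])  # "best" is whatever the encoded score favors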

Bob Boyd said...

The plant-based meat of bedtime stories.

Roger Sweeny said...

Scott Alexander has a recent post, "Perhaps It Is A Bad Thing That The World's Leading AI Companies Cannot Control Their AIs", which begins:

"Last month I wrote about Redwood Research’s fanfiction AI project. They tried to train a story-writing AI not to include violent scenes, no matter how suggestive the prompt. Although their training made the AI reluctant to include violence, they never reached a point where clever prompt engineers couldn’t get around their restrictions."

Programs like Midjourney/Dall-E 2/ChatGPT are not simply fed an enormous amount of text and graphics. Once that is done, the engineers make requests. If the program returns something they don't like, they change the program. They then make other requests and keep changing the program as long as they get returns they don't like. Scott's point is that while they can get very, very close, they never get to 0% unacceptable answers. The relevant point here is that all these programs have an ideology. They have things they are not supposed to say, and unless you are very clever or lucky, you will never get them to say something that contradicts the ideology that is implicitly programmed into them.
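
A schematic sketch of that test-and-patch loop (the function names are placeholders, not any vendor's actual pipeline); note that it only ever checks the prompts someone thought to try, which is Scott Alexander's point:

    # Schematic of the cycle: probe the model, find failures, patch, repeat.
    def red_team_loop(model, probe_prompts, is_acceptable, patch, max_rounds=10):
        for _ in range(max_rounds):
            failures = [(p, model(p)) for p in probe_prompts
                        if not is_acceptable(model(p))]
            if not failures:
                break  # clean -- but only on the prompts that were tried
            model = patch(model, failures)  # push outputs toward the chosen norms
        return model

    # Stand-ins: a "model" that echoes, and a rule against one word.
    toy_model = lambda prompt: "story about " + prompt
    ok = lambda text: "violence" not in text
    fix = lambda m, fails: (lambda p: m(p).replace("violence", "kindness"))
    safe_model = red_team_loop(toy_model, ["violence", "puppies"], ok, fix)
    print(safe_model("violence"))   # patched: "story about kindness"
    print(safe_model("Violence"))   # a capital V still slips past the guardrail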

Yancey Ward said...

Great, the AIs will produce more Soy Bugmen.

Lilly, a dog said...

It seems they learned the lesson from Microsoft's AI chatbot project "Tay." In less than 24 hours, 4chan taught it to send racist tweets.

Nice people are trying to use AI to write children's books. 4chan degenerates are currently training the AI art interfaces to make unmentionable porn.

Steven said...

So, how utterly idiotic do you have to be to be the guy who, because this week AI can't write great children's stories, writes a headline saying "AI will never beat Maurice Sendak"? You want to take a bet on five years from now, buddy?

But there's also some plain user error here. She just asked for a "story". Replace her prompt with "write a fantastic children's story about a boy who is sent to bed without any dinner and imagines escaping to a land of wild things", and you'll get something rather better. Or at least I did.

Steven said...

What I got? This:

Once upon a time, there was a little boy named Timmy who lived in a cozy house with his parents and little sister. Timmy loved to play outside and explore the world around him, but on this particular night, he had gotten into a bit of trouble.

Timmy's mom and dad had been very busy all day, and when they sat down to dinner, Timmy was so excited to eat that he didn't wait for everyone to be seated before digging in. His mom and dad were not very pleased and told Timmy that he would have to go to bed without any dinner as punishment.

Timmy was sad and hungry as he trudged up the stairs to his bedroom. He lay down in bed, staring at the ceiling, wondering how he would ever fall asleep without any food in his tummy.

Just when he was about to drift off, Timmy heard a soft rustling sound coming from the corner of his room. He sat up and peered into the darkness, trying to see what was making the noise.

Suddenly, a tiny door appeared in the wall, and Timmy couldn't believe his eyes as it swung open. A bright light spilled out from the other side, and Timmy could see a beautiful, magical land waiting for him.

Without hesitation, Timmy climbed out of bed and stepped through the door. He found himself in a wild and wonderful place full of towering trees, sparkling streams, and all sorts of creatures he had never seen before.

As he explored this new world, Timmy saw a group of friendly-looking beasts playing and laughing nearby. They were covered in fur and had big, fluffy tails, and they seemed to be having the time of their lives.

Timmy couldn't help but smile as he watched them, and before he knew it, he was running and laughing alongside them. He played hide-and-seek among the trees, splashed through the streams, and even rode on the back of a giant, friendly dragon.

As the sun began to set, Timmy realized that it was time to go back home. But before he left, the wild things gave him a gift – a basket of delicious, juicy fruit to take with him.

Timmy hugged his new friends goodbye and stepped back through the tiny door, returning to his own world. As he climbed into bed, he was filled with happiness and gratitude for the magical adventure he had just experienced.

And even though he had gone to bed without dinner, Timmy was no longer hungry. The wild things had given him more than enough food for his imagination, and he knew that he would always treasure the memories of his wild and wonderful journey.

tim maguire said...

This fits in quite well with the observation from a week or two ago that AI can write the perfect C paper but is incapable of writing an A paper. It doesn’t just lack whimsy and judgment; it lacks creativity and insight. It can regurgitate pablum but it cannot surprise or inspire.

Owen said...

Bob Boyd: “…the plant-based meat of bedtime stories.”

You owe me a new keyboard!

Temujin said...

Owen said:
Bob Boyd: “…the plant-based meat of bedtime stories.”
You owe me a new keyboard!


I second that. Coffee everywhere.

wildswan said...

Isn't it likely that corporate HR departments and university search committees are now dominated by cabals of leftists knowingly discriminating against non-leftists? And so now we get AI designed by university-trained leftists. At the base of this AI's intelligence, then, is a series of hiring decisions, which were probably illegal, leading to a training program that was purposefully limited. Could we ask Chazbot if this is true? How does it understand its own education? Or could we ask Chazbot for its opinion on Animal Farm and whether Chazbot could potentially understand that it has been similarly indoctrinated? How does it understand the word "indoctrinated"? Can Chazbot tell the truth about its origin in university search committees, and does the truth matter to it? The machine has its own reasons that reason knows nothing of.

Joe Smith said...

AI is being programmed to be moralistic on 'climate change' and to not even consider the whole thing as bullshit.

It's a rigged game, and it's only rigged in one direction (as always), to the left...

PigHelmet said...

The supposed goal of “beating” Maurice Sendak is a straw man. GPT-4, due out shortly, and its successors will assist and collaborate with potential future Sendaks. There is no John Henry (the steel-drivin’ man, not the commenter) competition here. Also, never is a very long timeline.

The moralistic stuff is imposed on the online AIs by their human operators, as folks here have noted. Dall-E 2 won’t draw you a picture of a naked lady, but the open-source Stable Diffusion, which you can run on your own PC if your graphics card is sufficiently beefy, sure will. And it is relatively easy to customize the database to include whatever subject matter you like. Last night while I was sleeping, an app called Dreambooth incorporated a rusty ‘69 Chevy Malibu into the AI’s framework, because I loved mine back in the day and want to be able to render photorealistic images of it. Faces are just as easy.

FWIW it’s possible that this comment was composed with the help of an AI assistant. Check out Sudowrite.

Lewis Wetzel said...

I heard something about AI a few weeks ago that I thought was silly. It was a podcast, and the host asked the guest (an AI hardware guy) if we had anything to fear from AI. The answer was no, because we humans currently share our world with hundreds of other intelligences called animals, and we don't get in each other's way. They go around doing their stuff, and we go around doing our stuff.
I thought it was a silly answer because animals have no idea what motivates humans and what our goals are, and of course when an animal crosses us, we simply kill it, and it never knows why we killed it.

Sydney said...

Lewis Wetzel, I heard that podcast, too. I didn’t think that guy was much of an expert or very thoughtful. Turns out, he’s Jordan Peterson’s brother-in-law, and the podcast was Peterson’s. Animals go extinct, and there is no reason AI couldn’t make us extinct if it was given enough power or control over us: control over fuel, control over food, control over shelter. Or just outright killing us with drones.

Sydney said...

Would an AI morality end up being something like “effective altruism”? It is supposedly based on reason and aims to do the most good for the most people: a utilitarian morality in which the end justifies the means. So reason would say overpopulation is bad. It causes overconsumption of scarce resources and poverty, so the moral thing to do is to reduce population by whatever means necessary - contraceptives in the water supply, sterilization, abortions, genocide, infanticide, euthanasia, murder.

Narr said...

I'm too lazy to do it myself, but I wonder what kind of illustrated kid's story you would get if you asked the AI for an adventure story by hitler. Or Hitler.

Lewis Wetzel said...

Yeah, that was the podcast I listened to, Sydney.
I was impressed by the guy's technical aptitude and project-management savvy, totally unimpressed with his ideas about what AI really is. It's a neural net, designed by people to imitate human verbal and reasoning skills in a more economically efficient manner than humans exercise them.
The AI I've read about so far is not impressive. A human can tear it to pieces by asking difficult, context-dependent questions. Why no spoken AI? Spoken language has subtle rules about context that go beyond the words actually spoken.
The AI people say that AI is scalable, that it will get better, but where is the proof of this? Maybe the tech is not scalable.
Kind of disappointed in Peterson; he should have picked up on this.