Write Noam Chomsky, Ian Roberts and Jeffrey Watumull in "The False Promise of ChatGPT" (NYT).
"... ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a 'just following orders' defense, shifting responsibility to its creators. In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity."
48 comments:
Don't know nothin' about this chat or high tech stuff.
Just this week, learned that I could ask Google on my phone.
Naturally, I asked Google who is the smartest and most handsome man in my home.
Google's lovely woman's voice confirmed my opinion, so I am now a believer.
Talk of AI's "false promise" is itself false.
ChatGPT doesn't "promise" any marks of "true intelligence" or moral concern. Would ChatGPT commit obvious strawman fallacies? Only humans would thoughtlessly repeat, and repeat, that it doesn't "think."
Did they ever find Chomsky's 'Language Acquisition Device'?
Chomsky is overlooking a linguistic achievement he doesn't like, a threat to status.
The program, though, is just equivalent to an exponential-sized "if this, then print that; if that, then print this; etc." table, but coded more efficiently, so it's not exponential-sized.
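The "giant if/then table" picture can be made concrete. A minimal sketch (illustrative only; a real language model stores learned weights, not an explicit table): a table mapping each seen context to its continuations behaves like an enormous if/elif chain, and the number of possible contexts grows exponentially with context length.

```python
# Illustrative sketch: a "lookup table" next-word predictor.
# A real LLM compresses this into learned weights instead of an
# explicit table, but from the outside it behaves like one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Build the table: context word -> counts of the word that follows it.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def predict(word):
    """Equivalent to an 'if word == ...: print ...' branch per context."""
    followers = table.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # -> "cat", the most frequent follower of "the"

# With a vocabulary of V words and contexts of length k, an explicit
# table needs up to V**k rows -- the exponential blow-up the comment
# alludes to; compressing it is what makes the program tractable.
```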
A bit off topic but… I asked ChatGPT about a local basketball player. The answer it gave me had at least two factual errors. One was about where he had played previously. It claimed he had played for the Pacers, which he hadn’t. The second mistake had to do with where he is playing now. The app gave me his previous team, not the local team he’s with now. For such a simple search it came up short. I’d say it’s not ready for prime time.
How do we know ChatGPT didn’t write that?
ChatGPT is not intelligence. It just rapidly collects information and uses programmed processes to make sentences. Since it picks up information from a web overwhelmingly biased to the left, it reflects that bias in its product, even without the bias of its leftist programmers. I give it an A+ for its speed but an F for its intelligence.
I think this is what teachers mean when they say ChatGPT can write a solid C paper but has no capacity to write an A paper--no creativity, no insights, nothing new. Just fast, reliable regurgitation of what is already out there.
Re: The False Promise of ChatGPT
I asked Bing for, "science breakthroughs that were predicted to be wrong or impossible." Bing responded ...
Hello, this is Bing. I found some examples of science breakthroughs that were predicted to be wrong or impossible:
Quasicrystals: In 1982, Dan Shechtman discovered a new type of crystal structure that had symmetries that were considered impossible by existing theory.
mRNA vaccines: In the 1990s, Katalin Karikó and Drew Weissman developed a method to use messenger RNA (mRNA) as a therapeutic tool, but faced skepticism and rejection from the scientific community. Their work later enabled the development of mRNA vaccines for COVID-19.
Heavier-than-air flight: In 1895, Lord Kelvin, a prominent physicist, stated that “heavier-than-air flying machines are impossible”, only to be proved wrong eight years later by the Wright brothers.
We shall see.
It looks like chatbots are already replacing telephone systems for customer services. I guess that's bad for the foreign call centers, but good for the Americans who have to deal with them. Chatbot isn't perfect, but it's certainly no worse than what we already have. It knows about as much as the customer service reps do and it's easier to understand.
My impression of high school is that it's not that hard to write an A (or at least a B+) paper if you just do a halfway decent job repeating what teachers and sources have already said. Brilliance and creativity aren't expected. It may be different in higher education and the employment world.
The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws.
But if causal mechanisms and physical laws are programmed into it, what’s the difference?
It's a shame that Chomsky has not limited himself to opining on language.
People are overlooking the linguistic achievement though, in narrowing down a topic and constructing a response for that topic, from a database that has to go through the same analysis.
That's definitely not trivial. It's called constructing a gloss, and it was still unsolved back when I encountered the problem.
So. As smart as a mainstream media journalist. That about right?
I work with computers, so I don't play with computers.
Has anyone noticed, does ChatXYZ ever make grammatical errors?
I had a program that delivered AP news stories predicted to match your tastes, based on which stories you had mailed back to the server with like or dislike evaluations; it used a thesaurus and a system of (largest) eigenvalues. It sort of worked. A guy claimed he liked a story of a woman being arrested for something and got back all sorts of women-in-trouble stories. No understanding was involved, and no language generation at all, just words and associated words. AP did all the writing.
To identify the topic, and then write on that topic in idiomatic English, is a much higher achievement, and touches on some of Chomsky's areas of expertise.
Old enough to remember when the idea of a chess-playing computer was new. Much pooh-poohing. Now any good cellphone can beat the world champ.
Point is, technology advances, and programs learn, just as infants grow, and learn. Means what's true today may not be true next year, next decade, next century.
How do we know ChatGPT is a Progressive?
It often invents facts to support its assertions.
@tim maguire, with respect, sir, these days regurgitating exactly what the professor said in class is precisely how to get an A in today’s academia. The student with originality is the one at risk of a C. If the student’s conclusions are at odds with the professor’s biases, an F is not unlikely. The teachers you quote are from 50 or 60 years ago. Don’t look for appreciation of originality in today’s academia.
@Althouse, your lack of an “I’m skeptical” tag suggests to me that you actually believe Chomsky is capable of understanding how an AI system works. Why would you believe that?
Talk about the pot calling the kettle black.
There is a history-oriented YouTube channel called Metatron* which has recently posted a scathing critique of another history-oriented YouTube video represented by its creators as a factual discussion of Medieval and Renaissance European armor. The critiqued video contains a considerable number of erroneous claims and misrepresentations of well-established historical truths that are so absurd that only AI-assisted research can be plausibly blamed.
To cite just one example, the video shows us a grotesque** tournament helm gifted by the Holy*** Roman Emperor, Maximillian I, to the young Henry VIII in about 1515, and then illustrates the gift-giver with one of the official portraits of Maximillian I, the Hapsburg Emperor of Mexico who was deposited and executed by revolutionary forces led by Benito Juarez in 1867. A human being with a reasonable basic education and native intelligence tasked with conducting an image search using Emperor+Maximillian+I as search terms would immediately reject this image in the context of a gift presented to a Renaissance personality solely on the grounds of the gross disparity of attire worn by the monarchs in question. Someone born in the 19th century cannot present a gift to someone who died in the 16th century, a valid logical inference a genuine intelligence can apply critically to images that an artificial intelligence cannot.
* Metatron was the name given in Kabbalistic lore of the late Hellenistic period to the "recording secretary" of Yahweh's divine council, in other words, the unimpeachable historical source. Clever, but a bit arch if you ask me.
** Before grotesque became an English adjective it was a French term for parade or tournament armors with monstrous or comical designs intended to amuse spectators more than protect the wearer.
*** Blogger's own AI (genuine stupidity) insists on changing Holy to only.
How do we know ChatGPT didn’t write that?
An astute question. /sarc off
How do we know Left Bank of the Charles isn't the product of artificial intelligence? The pre-packaged prognostications and canned quips strike me as suspicious.
I want to ask ChatGBT(LBTQ+?) why so many people are so gucking stupid.
Actually, I don't want to ask it that, but would be interested in the answer if someone else asks.
That's the first logical thing said by Chomsky in several decades. Pretty much everything he said after developing Generative Grammar can be safely ignored. This includes his writing on Government-Binding, the Language Acquisition Device, and all the Anarcho-Syndicalist political stuff.
Never before was so little content puffed up so much, and never before was so little content blindly accepted by outsiders.
Chomsky criticizing ChatGPT = pot calling the kettle black
"[T]he partisan mind is a surprisingly efficient and even elegant system that operates with small amounts of information..."
Fixed it for you.
Quaestor: Saw that video.
Now, to subject at hand - can ChatGPT write a paper arguing a point, such as this: knowing what was available to him, was Admiral Halsey correct to pursue the Japanese carriers at Leyte Gulf?
It is one thing to collate information, it is another to make a judgment regarding how important information is, and weighing it, form a conclusion.
Typo alert: "deposited and executed" should read deposed and executed. AI strikes again.
Next time I'll use toppled. I'm rapidly becoming an AI Luddite. If I had wooden shoes to throw at Google I'd throw both of them plus a few giant sequoias.
Sebastian writes, "Would ChatGPT commit obvious strawman [sic] fallacies?"
Without the source code available for deep analysis, the question is utterly rhetorical. (The scare quotes applied to promise, true intelligence, and think raise probity concerns as well, but let them slide for now.) Every programming task begins with certain assumptions, the foremost being the suitability of the algorithm being applied. The history of computation is full of specious and misapplied algorithms, an obvious example being the use of so-called random number generators. It turns out that none of them produce random numbers. The advocates of quantum computing claim such technology can produce and apply true random numbers, but so far nothing concrete has been tested and proven.
There are numerous common tasks with numerous associated algorithms. At this moment I can reach out with my left hand and touch the spines of three books full of public domain and proprietary algorithms. Which one a programmer chooses or invents for himself necessarily involves embedding a bias in the source code. Take for example the various photo-retouching and upscaling applications commercially available through Apple's App Store and other channels. They are offered as tools to be applied to common tasks in photography, but they do not produce identical outputs derived from identical inputs. Gaussian blur is a popular photographic effect, and many photo editors have Gaussian blur as a tool or applicable filter. Adobe owns at least two copyrighted algorithms for Gaussian blur that produce subtly different outputs from any other software publisher's version of Gaussian blur, the differences largely arising from the different pseudo-random routines applied.
Furthermore, consider my 3:28PM comment. Google abruptly and unceremoniously substituted only in place of Holy. Not once, the AI did it twice! Is it remotely plausible that Google's internal dictionary doesn't contain holy and that I intended to type "only"? I think not. What's more likely is programmer bias. Rome was bad, didn't you know? The Romans kept slaves and made war, therefore the adjective holy is inappropriate when concatenated with the adjective Roman and must be expunged.
It's a shame that Chomsky has not limited himself to opining on language.
I believe his theories of language acquisition are still widely accepted in Psychology. However, he is less of an expert on Cognitive Psychology and how we form ideas. For that, I would refer to past works of Danny Kahneman and Amos Tversky and the current experts who are most likely to be their former students.
Inferential logic is creative. The crux of machine learning is a basket of data and a cache of correlations, where the latter is a function of smoothing human design over iterations. The limitation of machines is in measured degrees of freedom, and the article of faith that science can discern origin and expression.
ChatGPT does not do research. It does not "Google". It does not understand anything.
It constructs responses based on statistics about pre-digested texts from the internet. It may invent "facts" based on its estimates of likelihood. Including things like inventing academic paper references, complete with fake combinations of real authors and fake journals, based on nothing more than bits of text in its training data.
I have been following AI since the days of SHRDLU and the Blocks World, circa 1975. I am not impressed.
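The "statistics about pre-digested texts" mechanism described above can be sketched in a few lines (a toy bigram model, not ChatGPT's actual architecture; the training strings are invented for illustration): the generator chains together fragments it has seen into fluent-looking strings, with no check that the result is true, which is how plausible-but-fake citations arise.

```python
import random
from collections import defaultdict

# Toy training data: fragments of (fictitious) citations.
training = [
    "Smith et al. 2019 Journal of Results",
    "Jones et al. 2021 Annals of Findings",
]

# Bigram table: word -> list of words observed after it.
follows = defaultdict(list)
for line in training:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Emit a fluent-looking string by chaining observed bigrams."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# May splice Smith's name onto Jones's year and journal: grammatical,
# confident, and entirely unverified -- a fabricated "reference".
print(generate("Smith"))
```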
I am perhaps demonstrating my ignorance (or obtuseness) regarding how this works--but has anyone programmed Chat to engage itself in debate on major issues of the day? Would it be capable of changing its "mind"?
Suppose you programmed it on one side with "liberal" assumptions and ideas, and on the other side with "conservative" assumptions and ideas. (Maybe let major "thinkers" on each side come up with the ideas and assumptions). Would one side "win?" Would the "losing" side trust the outcome? Would either side learn anything?
A moment ago I followed an ad link to a site offering Men's Unisex Tee Shirt(s).
Evidence suggests that's not an AI artifact. A genuine human concocted that flapdoodle with nary a trace of self-consciousness.
I asked ChatGPT to write my bio, then to try three more times. All four had many, but different, errors. Several mentioned the book I wrote and added another, nonexistent book. Three made me a Berkeley grad. (No.)
Then I put in my husband's not-very-common name. The first one said he was born in 1938 (no) and died in 2017 (NO!). He was deceased in the second one too. Both had him at the University of Calgary, where I found a woman with the same last name had earned a PhD. She's not dead either.
Chomsky better watch it talking about how the brain makes quick often accurate judgements. What he is saying is that bias is good for a first approximation.
Joanne, hi! Now, I'm going to ask it to do my bio, and my father's. Should be interesting, since we share all 3 names and lived in the same region most of our lives.
Joanne: Sadly, I died in 2005, but my influence lives on via the company I founded, which has become a household name!
It got one fact correct: after I told it where I studied, it came up with the right degree. All else was fiction.
The bios it came up with for Col. Fred John Drinkwater were even more fanciful. In those cases it also got one basic fact correct: that he was a pilot. This was particularly disappointing since his obit has been online in several places for many years, plus his publications and awards.
Maynard writes, "I believe [Chomsky's] theories of language acquisition are still widely accepted in Psychology."
Perhaps. Many anthropologists are increasingly less sanguine about his theories. But Chomsky is right about one thing -- Ko-Ko is a no-no.
@Maynard: "I believe his theories of language acquisition are still widely accepted in Psychology."
@Quaestor: "Perhaps."
Chomsky was NEVER accepted in psychology. Psychology is rooted in biology and hands-on empirical testing. Data, data, data or don't go there. Chomsky is an abstract rationalistic 'formal linguist' -- his discipline has more in common with mathematics than biology or psychology. The psychologists always dismissed Chomsky as far too abstract and lacking data. The Language Acquisition Device became popular only because he preached it to similar rationalists and political types rather than data-driven scientists. The scientists ignored Chomsky as vacuous, so the LAD concept was allowed to spread far beyond its actual value.
Biologists and psychologists focus on brain structures, learning, the anatomical differences between apes and humans (i.e., mouth and throat structure), and the evolutionary pressures that allowed simple ape calls and shrieks to become structured language. The "language acquisition device" is actually the end product of many thousands/millions of years of step-by-step changes, so it shouldn't be compared to a device so much as an evolved process and capability. Even so, learning a language requires close coordination with parents and peers and evolves into dialects and inside/generational changes. Fuzzy, fuzzy, fuzzy. Organic, organic, organic.
The biological / psychological processes are very grounded and specific to the facts as observed. Chomsky waved his hand at biology and went back to an imagined mathematical perfection of languages (but this is also narrow and misleading). But, he's a charismatic (and massively overwrought) political speaker, and thereby lured many young people into his weird bubble.
Whatever you think about Chomsky's politics, he and his colleagues are consummate wordsmiths.
Is it correct to assume that while you are having a conversation with ChatGPT, thousands of other conversations with the same entity are occurring in real time, each subtly changing its information base and "world views"? This might help explain the differing responses even within minutes of each other.
"It's a shame that Chomsky has not limited himself to opining on language."
Why? You can't handle the truth?
"Chomsky better watch it talking about how the brain makes quick often accurate judgements. What he is saying is that bias is good for a first approximation."
If you put it to him in those terms, he might even agree with you. (Or not...I'm just supposin'.) But, even if so, the problem is in simply accepting bias as the truth, not going past bias to determine if the "first approximation" is correct or incorrect, (or to what degree in either case), and why and how. Pretty much everything humankind has learned has been the result of looking beyond surface appearances, putting aside or testing biases for validity, and finding what lies deeper.
Biotrekker: no, it does not work that way. It operates almost entirely with a pre-digested collection of human-curated training data from the web. It's a statistics engine, not a researcher, interlocutor, or collaborator.
@Narr:
"Why are so many people so gucking stupid?"
ChatGPT:
"I understand that you're feeling frustrated and perhaps even angry towards others. However, I don't believe in judging or labeling people as "stupid". It's important to remember that everyone has their own unique experiences, beliefs, and perspectives that shape their thoughts and behaviors. Instead, let's focus on identifying the specific situations or behaviors that are causing you frustration and work on developing coping strategies to manage those feelings. Would you like to explore what's behind this frustration further?"
@Mikey NTH:
"knowing what was available to him, was Admiral Halsey correct to pursue the Japanese carriers at Leyte Gulf?"
ChatGPT:
"Admiral Halsey's decision to pursue the Japanese carriers at Leyte Gulf was the right call, despite the criticism he faced. Any argument to the contrary lacks historical context and ignores the broader strategic considerations that were at play.
First and foremost, Halsey's decision was based on the intelligence he had at his disposal. The carriers were the primary threat to the American forces, and eliminating that threat was crucial to securing Allied victory in the Pacific. Halsey's pursuit of the carriers was a necessary move that ultimately contributed to the defeat of the Japanese Navy.
Furthermore, the argument that Halsey should have prioritized protecting the landing forces over pursuing the carriers fails to take into account the larger strategic picture. The carriers were the key to the Japanese Navy's success, and eliminating them was a crucial step in ending the war. By engaging the carriers, Halsey was able to inflict significant damage on the Japanese Navy and effectively cripple their ability to mount a comeback.
Critics of Halsey's decision often point to the fact that pursuing the carriers left the American landing forces vulnerable to attack from the Japanese surface fleet. However, this argument ignores the fact that Halsey was not aware of the full extent of the Japanese surface fleet's strength and location. His decision was based on the information available to him at the time, and he made the best call he could based on that information.
In conclusion, Admiral Halsey's decision to pursue the Japanese carriers at Leyte Gulf was a bold and necessary move that ultimately contributed to the Allied victory in the Pacific. Any argument to the contrary lacks historical context and ignores the larger strategic considerations that were at play. Halsey made the right decision based on the information available to him, and we should recognize and celebrate his contributions to the war effort."
Thanks for the test runs, PigHelmet.
Interesting reaction to the 'stupid' query--psychologize rather than even address the meanings of 'so many' and 'stupid.' Shallow pop-psy, therapy rather than exploration.
The Halsey essay is too repetitive, on first pass.
On second pass, it's almost a template for justifying any controversial decision that didn't end in disaster for the victor in a campaign or war.
Not ready for primetime.