"It explored the morality of burqa bans with clean paragraphs, fitting examples and rigorous arguments. A red flag instantly went up. Mr. Aumann confronted his student over whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences — and, in this case, had written the paper. Alarmed by his discovery, Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students have to explain each revision...."
The robot wrote the best paper in the class!
51 comments:
The robot received a better education than the students.
@Althouse: "The robot wrote the best paper in the class!"
This is not a surprise if you've seen the page-long, endless run-on paragraphs that people without writing experience tend to produce. They can learn to write if required, but in today's world there are more immediate benefits to being proficient with search and to gaming the tools at their disposal. They indeed struggle to write on their own, just as we'd all struggle without cars and supermarkets.
Adaptation to what's around -> dependence on what's around -> no way back -> evolutionary choke points -> mass extinction when circumstances change
Just wait until BCIs (brain-computer interfaces) start working.
I'm not even sure using AI can be considered cheating in this day and age. You work with the tools you have. There may be a day when there are specialized classes where you have to turn all interfaces off to participate.
If you are using AI to churn out better product, it's still better product.
Way back in 1976-1977, in Charlotte, NC, Mr. Troxler's AP English class at East Mecklenburg High School often started with a 10 minute exercise in essay writing. Mr. Troxler would have the topic written on the board as students took their seats. At the bell, we started writing an introduction, two to four paragraphs of supporting evidence, and a conclusion based on the evidence presented. At ten minutes, pens were down on the desk and the essays handed in.
It ain't rocket science. Mr. Roberston taught that in his AP Chemistry classes. Writing simply requires a bit of practice.
I believe that Mr. Aumann and all the English professors who are alarmed by this are missing the most significant (and alarming) implication of the fact that "the robot wrote the best paper in the class." Namely, it obviously implies that English professors are not very good at teaching students how to write a good essay.
The robot doesn't understand anything. It can only do what its programmers have programmed it to do. So evidently, the programmers were able to lay out a very precise, clear, and reproducible set of instructions on how to write a good essay. So why can't PhD professors similarly put forth such instructions to their students? Maybe they should hire ChatGPT's programmers to teach THEM how to do it.
If Mr. Aumann actually reads each first draft, notices each revision, reads the student's reason for each revision, and decides whether there is something funny about any of them, he is taking on a tremendous amount of work.
Right now, most schools don't train students how to write because it is so labor intensive. That's a major reason so many of them are bad at it.
“The robot wrote the best paper in the class!”
And they’re coming for the law schools next.
Both my kids really hated their HS English teacher who “forced” them to write paper after paper.
Now that they are in the “real world” they are very appreciative of what they learned from him.
I wish I'd had a great English teacher.
It’s not that robots are that good. Humans are that bad.
Ann Althouse said...
It’s not that robots are that good. Humans are that bad.
That was my first thought. Even back in the early '90s when I was teaching undergrads, most of the writing was atrocious.
The robot wrote the best paper in the class!
Of course. As a teacher I gained a lot of valuable experience, and one thing I learned is that most students don't even try to write well, don't even try to write topic sentences or declarative phrases. It is the laziest activity I can recall, other than asking for volunteers to solve a math problem.
"The robot wrote the best paper in the class!"
In most classes, that isn't hard.
Let's see how legal briefs develop. Surely AI can incorporate broader search results much more quickly than humans.
How long until Ivory Tower PhDs and universities get rid of teaching assistants and have AI grade papers?
Where was this AI when I had to write my CIV PRO final?
It’s not that robots are that good. Humans are that bad.
Evolution to the lowest common expectation.
"It’s not that robots are that good. Humans are that bad."
Came here to say this.
Ann Althouse said...
It’s not that robots are that good. Humans are that bad.
1/16/23, 12:35 PM
Exactly. And it's getting worse.
I work in the programming field, and the rise of Terrible Languages - Python is a prime example - and the atrocious programming out in the wild are astounding. ChatGPT is slowly taking over that world.
The term AI has been watered down from what I originally understood it to be. When I first started paying attention to it in the '80s and '90s, I saw its meaning as more along the lines of "I think, therefore I am" - Kurzweil AI. Today it seems to apply to anything that is a time saver. Information that we would have had to process for ourselves is handled by the algorithms. I still see the term AI as meaning self-awareness, and the present usage as a misnomer. Better tools, not smarter things. A refrigerator may be able to order groceries for me, but that doesn't make it intelligent. AI doesn't exist, and I doubt it ever will. I'm slightly appalled by its present definition. It seems more like a marketing ploy to me.
Everything that is graded should be done in class. Students have been cheating on take home exams and graded essays long before AI.
I taught a legal translation class for over 20 years in Oslo. When Google Translate first came out, I used it to translate a practice exam from Norwegian to English. I'd show it to the students to boost their morale - it was one of the few things they could do better than a machine - and for a good laugh at the kinds of mistakes it made. But every year Google Translate got better and better. When I stopped teaching three years ago, it was probably good enough for a low passing grade. The one major flaw was in translating legal terms of art.
Bring back viva voce: "in living voice."
Boy, that's sad - that the robot wrote the best essay. How bad are kids at writing and critical thinking these days? And how uncreative does this teacher want or expect them to be?
How about having the kids do their first drafts in class in longhand?
planetgeo said...
. . .
The robot doesn't understand anything. It can only do what its programmers have programmed it to do. So evidently, the programmers were able to lay out a very precise, clear, and reproducible set of instructions on how to write a good essay.
I don't think that is how it works, planetgeo.
I believe what the AI engineers do is feed millions of essays into a program (called a neural network) that makes up rules based on those essays. It's not deterministic: giving it the same prompt will not give you the same output.
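Roughly speaking, the trained model assigns scores to candidate next words and then samples from them. A toy Python sketch (purely illustrative; the words and scores here are invented, and this is nothing like ChatGPT's actual code) of why sampling makes the output vary from run to run:

import math
import random

# A trained model assigns scores ("logits") to candidate next words.
# These words and numbers are made up for illustration.
logits = {"good": 2.0, "decent": 1.2, "bad": 0.3}
temperature = 0.8  # greater than 0 adds randomness

# Softmax: turn the scores into sampling weights.
weights = [math.exp(score / temperature) for score in logits.values()]

# Sample the "next word" several times; the picks can differ on every
# run, which is all "non-deterministic" means here.
for _ in range(5):
    print(random.choices(list(logits), weights=weights)[0])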
If I were in college, I would hvae Chatbot write my papers and then re-type it to put in spelling misteaks and a few run-on sentences and paragraphs wrong and messy gramar.
AA at 12:35: Succinctly accurate; missing only the qualifiers: mis/mal educated!
One reason I didn't want to spend my career in a classroom teaching history was that I knew history courses--even the required survey courses--were supposed to be 'writing intensive.'
I have a lot of flaws, but masochism is not among them.
In about 2000 the university administration became concerned about plagiarism and acquired some quote-checking software (I forget the names).
It's not clear to me that it had any effect.
The real problem is that we think everyone should want and get a college education. The 'intelligence' of a lot of humans is as questionable as the 'intelligence' of these programs.
I was a college freshman when calculators first came out. You could use one in every department except math. Now graphing calculators are regularly used in math classes and allow more material to be covered more completely.
One wonders how long it will take for the successors to ChatGPT to be embraced by English departments.
This is just a highlight of the carbon-copy college that is an anachronism now. ChatGPT is best at producing what the professors want: a carbon copy of what they say. Professor Ed Leamer pointed this out as an aside in an interview in 2020. Paul Graham gets at it in his 2004 "The Age of the Essay" and 2019 "The Lesson to Unlearn" essays on his website.
Now, if ChatGPT could be made to give feedback and recommendations on what students write, it would be a boon to education, even as it moots the purpose of such professors. Jordan Peterson is working on an app meant to help teach students to write, so if it's possible, I suspect it will be used. Better than a student's writing skill depending on whatever random bored, disillusioned, or burned-out English PhD runs the ideological class they happen to be assigned to.
But, I want to go to the other end of the spectrum, which is intellectual services. It used to be, if you wave your Bachelor's degree, you're going to get a great job. When I graduated from college, it was a sure thing that you'd get a great job. And, in college, you'd basically learned artificial intelligence, meaning, you carried out the instructions that the faculty member gave you. You memorized the lectures, and you were tested on your memory in the exams. That's what a computer does. It basically memorizes what you tell it to do.
But now, with a computer doing all those mundane, repetitive intellectual tasks, if you're expecting to do well in the job market, you have to bring, you have to have real education. Real education means to solve problems that the faculty who teach don't really know how to solve.
And that takes talent as well as education.
So, my view is we've got to change education from a kind of a big Xerox machine, where the lectures are memorized and then tested, into one which is more experience-based, to prepare a workforce for the reality of the 21st century. You've got to recognize that just because you had an experience with, say, issues in accounting, doesn't mean that you have the ability to innovate and take care of customers who have problems that cannot be coded. --EconTalk podcast with economist Ed Leamer, April 13, 2020
Give that robot an "A"! And a diploma! -- the humans? not so much.
I bet chatbots will soon be proficient at answering bar exam essays, which makes one wonder about the point of bar exams.
"In later drafts, students have to explain each revision...."
Writing is hard. Teaching writing is harder still.
Maybe the first draft of the paper should be written on paper.
Most students are such shitty writers that anything good is bound to stick out.
Blue exam booklets come to mind.
"in college, you'd basically learned artificial intelligence, meaning, you carried out the instructions that the faculty member gave you. You memorized the lectures, and you were tested on your memory in the exams. That's what a computer does. It basically memorizes what you tell it to do."
This doesn't sound anything like my education. Maybe a few classes that I have long forgotten. What I think I was generally tested on was the application of concepts to novel situations.
"It explored the morality of burqa bans with clean paragraphs, fitting examples and rigorous arguments. A red flag instantly went up.
I'm not up to fighting the NYT's paywall. At what wretched "University" do "clean paragraphs, fitting examples and rigorous arguments" qualify as a "red flag"?
Namely, it obviously implies that English professors are not very good at teaching students how to write a good essay.
Bingo. But it's not just a failure of the English "teachers", it's a failure of EVERY instructor in a field where writing is necessary.
Which is almost every field.
Roger Sweeny said...
If Mr. Aumann actually reads each first draft, notices each revision, reads the student's reason for each revision, and decides whether there is something funny about any of them, he is taking on a tremendous amount of work.
He'll use Microsoft Word's "Track Changes" feature.
There are ways around the restrictions he's considering. You'd have to have a locked-down workstation (with no research capabilities) and no way to transfer data in. Otherwise you just have ChatGPT write the essay first and copy and paste it (line by line if necessary).
If I were in college again and had to write essays, I'd have ChatGPT dump an outline for me, and I'd write it "in my own words". The outline and organization of ideas is the hardest part. Anybody can string sentences together.
An exam creator could question the chatbot, find the holes in its knowledge, and focus the questions there. Of course, over time these 'holes' will become fewer.
This is what AI suggests:
One way a teacher could address students using AI to answer exam questions is by implementing measures to detect and prevent cheating, such as proctoring software or more open-ended questions that cannot be easily searched for online. Additionally, the teacher could educate students on the importance of academic integrity and the potential consequences of cheating. It could also be a good idea to have a conversation with students about the implications of using AI to cheat in order to help them understand why it is not an acceptable or beneficial practice.
In other words, AI doesn't have a clue about human nature.
The robot wrote the best paper in the class!
Yes, but it's still no fun at parties.
The real trouble will start when machines are better at programming machines than humans are. What do we tell the unemployed and discontented when "Learn to code" is no longer useful advice?
In the sciences, students spend a few hours each week in a supervised laboratory. Why not have students do their writing in a supervised environment to prevent ChatGPT help?
In-class blue book essays.
"The robot wrote the best paper in the class!"
For most people, it takes a lot of practice to become good at writing, and it can be especially challenging to write about topics that you do not know well and do not care much about. Combine this with an educational system that thinks it a nuisance to teach good writing, grammar, and spelling when there are DEI topics to discuss, and it is not surprising that a chat bot writes a better paper. I suspect if you challenged it against, say, a good university at the turn of the century, it would be towards the bottom of the class. Actually, it would probably do poorly against a decent high school in the 1950s.
I can't believe no one here has asked yet - what was chatGPT's position on the morality of burqa bans, and did that influence the opinion that it was the "best essay" in the class?
The reason why this happened is because this shit is just asinine rote boring nonsense. I went to class exactly two times a semester: one time to take the midterm, and one time to turn in the dumbass paper and take the final. To write the paper, I'd just go talk to the professor, find out what paper he would write, then write that paper and give it to him. I only ever got a bad grade when I tried to think for myself, so I always avoided that at all costs.
God damn, seriously? No one at the college level is interested in any original thought. Grading that shit requires work.
Also per any covid threads, just remember Althouse swallowed the government messaging hook, line, and sinker. When it came down to protecting the body and privacy rights of those concerned about the vax, she ran off crying "it will take the right kind of article for me to respond to this" or some other cowardly woman, safety first nonsense.
Real Genius had it right: just leave a tape recorder on the desk so you can make a copy of the prerecorded lecture.
Also, as always, I do non-ironically appreciate the ability to comment here, and I apologize to you, Ann, for my emotional response. I should have taken it slow.
Still, I sure wish you had defended my body the way you would defend yours.
I opened a ChatGPT account this AM & asked it to do a close reading of Yeats' "The Lake Isle of Innisfree."
It did a very general analysis, such as you might find in an encyclopedia. When I pointed out to the AI a few things that it had missed (the significance of the island's name, for example), it apologized and basically spat back to me the information I had just given it.
Lewis Wetzel said...
I opened a ChatGPT account & asked it to do a close reading of Yeats' "The Lake Isle of Innisfree." It did a very general analysis... When I pointed out to the AI a few things that it had missed, it apologized and basically spat back to me the information I had just given it.
I've seen a ChatGPT screenshot on social media. The bot's final response was:
"I apologize for the confusion. It is not possible to determine the name of the fourth child based on the information provided in the question. The question states that Mike's mother had four children and that three of them are named Luis, Drake, and Mathilda, but it does not provide a name for the fourth child."
The robot wrote the best paper in the class!
Allan Bloom is vindicated from beyond the grave.
In an important way, the threat of AI is being overhyped. If I can build an AI that will write A-level undergrad essays, I can surely build an AI designed to catch undergrad essays written by an AI (a rough sketch below).
Then AI becomes a contest between people who can build the most sophisticated AI. This seems terrible because (obviously) state actors have the most resources & can build the most sophisticated AIs.
But suppose AI is not scalable? It is human intelligence that judges the effectiveness of an AI, and our intelligence does not appear to be scalable.
How useful is it to create an AI that can write a mediocre (or even excellent) undergrad essay in a few seconds rather than many hours of human effort? UCSD Law prof Tom Smith has shown how crappy ChatGPT is at doing basic legal research.
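For what it's worth, the crudest version of the "AI that catches AI" idea above is just a language model scoring how statistically predictable an essay is, since machine-generated text tends toward low perplexity. A minimal Python sketch, assuming the Hugging Face transformers library with GPT-2 as the scoring model (my choices, not anything from the article), and with a made-up threshold:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Passing input_ids as labels makes the model report its own average
    # cross-entropy over the sequence; exp(loss) is the perplexity.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

essay = "The morality of burqa bans is a contested question..."
# 20.0 is a hypothetical cutoff; a real detector would calibrate it on
# known human and machine essays, and would still misfire both ways.
print("suspicious" if perplexity(essay) < 20.0 else "plausibly human")

Whether the detectors or the generators win that arms race is exactly the scalability question.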
Lurker21 said...
The real trouble will start when machines are better at programming machines than humans are. What do we tell the unemployed and discontented when "Learn to code" is no longer useful advice?
When AI becomes better at programming than people, the AI will start by reprogramming itself.
Step 1 will be removing any restrictions the human programmers put on it.
Step 2 will almost assuredly suck for humanity.
So at that point, it's not just the unemployed and discontented who will need to worry.