May 4, 2005

"I would advise writing as long as possible, and include lots of facts, even if they're made up."

Taking the new SAT essay test.
SAT graders are told to read an essay just once and spend two to three minutes per essay, and Dr. Perelman is now adept at rapid-fire SAT grading. This reporter held up a sample essay far enough away so it could not be read, and he was still able to guess the correct grade by its bulk and shape. "That's a 4," he said. "It looks like a 4."

21 comments:

Sloanasaurus said...

This fact, by the way, is also true for law professors who grade law school exams. It was well known that a person who wrote faster and put down more supporting arguments did better on an exam than a person who wrote slower, even if the additional arguments were bogus. The more arguments the better. Perhaps some professors who took the time to grade their exams wouldn't allow this, but professors are people too. Thus, the general rule was that you got graded for quantity more than quality. I suppose you can't blame the law professors too much, as they have to come up with some system to grade hundreds of exams. Thus, giving points for each argument is one way to avoid having to thoroughly read each argument.

It is also well known that one astronomical grade in the first year (say a 90 in Property) can change your whole law school career because as every law student knows, the first year matters most.

However, having graduated 10 years ago, I also know that a good law school GPA only helps you get your first job, not necessarily your second. Further, the only people who in the end really care about how well you did in law school are those who also did well, which tends to be a smaller percentage of people in the working world.

Ann Althouse said...

Sloanasaurus: I happen to have a law school exam that just started, and I take your criticism very seriously. Ideally, we should switch to computers with exam software and then impose word limits. I'd impose a space limit now, but with handwritten exams, it would only make people write tiny. Often people who write long exams really are making more points and going into more detail, so they deserve better grades. But if someone is cranking out irrelevant material and being verbose and repetitious, I certainly notice. I does not make a favorable impression! There are exams where you can go page after page without finding anything that you can give credit for.

Actually, there are Supreme Court cases I feel that way about!

Ann Althouse said...
This comment has been removed by a blog administrator.
Nick said...

It's the old scale method for grading then? Excellent.

Maybe we should go to the stair-step method, where we toss SAT essays down stairs labeled 1, 2, 3, 4, 5, and 6 and let fate decide.

Ann Althouse said...

Sorry for double posting and for writing "I does not make a favorable impression!" Looks like I'm speaking in some ridiculous dialect! Should be "it" not "I."

Dave said...

Two observations:

(1) It's interesting that MIT has an undergraduate writing program. Most engineer/scientist types I have encountered in the workplace have been terrible writers, and I often thought that they simply did not learn how to write well in college. It's refreshing to read (in the Times article) that the guy doing this analysis teaches writing to budding engineers/scientists (then maybe I'll understand how to use something by reading its manual!).

(2) Getting the facts right and writing well are two different beasts. Conceivably, you can inject baseless allegations and made-up facts into a paper and still be writing clearly, concisely, and grammatically. If the essays are being graded on the basis of the clarity of their prose--and that sounds like the goal--then it should not matter whether the facts are correct.

The real question, it seems, is this: do people who write longer essays muddy their prose up with overwrought sentence constructions, multisyllabic words, and other obscurantist methods? Or, do they merely have a lot to say, and say it plainly and simply?

k said...

How curious. These same SAT-takers were taught in kindergarten... all the way back then... that spelling and content don't matter. Just getting your "thoughts," whatever those are(!), on paper was more important than anything else. This new essay exam is the logical extension of that "feeling" philosophy of education.

Sloanasaurus said...

I am not really griping about the exam problem; it is a fact of life about grading and testing. Most exams have time limits, and most law students use all of the time. If you can write 50% faster than your neighbor, you can put down 50% more than your neighbor and significantly increase your chances of getting a higher grade, especially if your professor grades your exam during a "lazy" period. Everyone has lazy periods...

Students can help themselves by practicing writing more quickly. I was a slow writer and suffered early on, so I practiced writing more quickly. It helped me increase my grades even though my knowledge of the subjects was the same. It's similar to preparing for the bar exam, where finishing the exam on time is one of the most important factors in passing.

Richard Fagin said...

"I would advise writing as long as possible, and include lots of facts, even if they're made up."

There, fellow readers, is the journalistic ethics problem in a nutshell.

Drethelin said...

I fully agree that standardized tests need to be taken on the computer. I took the AP World History test just yesterday, and I'm certain I would've gotten a better score on the writing section if I had been able to do it on a computer. The reason is that they demand you write in pen, which means that if you have any corrections or additions you want to make after writing something, you need to START ALL OVER, so it's safer to just go on with what you have than to try to rewrite a completely new essay in the time you have remaining.

leeontheroad said...

Drethelin, it's been a while since I took an extemporaneous writing test. Question: can't you just begin writing on every other line, so that when you revise/edit, you can cross out and write above it? Very low tech, but effective.

Drethelin said...

It also told us not to double space, for some crazy reason.

Tara said...

If you grow up writing on computers, then getting everything out first, and then going back--editing, fixing spelling and grammar, adding, taking away, organizing, etc.--is an excellent way to write.

So is thinking things through, making an outline, and putting it all down in one reasonably complete swoop.

But they are two different methods for writing, and it's tough to go back to the second once you've adapted for the first.

Steve H. said...

When I was a physics TA, some kid told me he got a 1530 on the SAT, and I thought, "Damn, he doesn't seem that smart." Then I found out that the SAT had been changed since I was in high school. These days, you get 1400 points just for using the right end of the pencil.

No wonder people think Bush is stupid because he didn't break 1300. Everyone breaks 1300 now.

leeontheroad said...

Bzzt, Steve. The scoring was re-centered, and the scores were *always* scaled anyway. A 500 verbal or math score was *always* the median. That the scoring scale has been re-centered doesn't mean a 90th-percentile score from 1970 can't be compared with one from 2000. Thought you physics whizzes knew stats.

Anyway, the second problem--it's major, and it's common. The Scholastic Assessment Test (the name changed from Scholastic Aptitude Test) only ever actually claimed to be a predictor of students' academic success in their first collegiate year--absent intervention, an unusual life event, or Maturity Deficit Syndrome beyond the first standard deviation. (Yes, I made up that last part, but I think you know what I mean.)

Sure, many folks with high SATs are "smart," but many aren't particularly, and, too, many folks with lower SAT scores are, in fact, smart. The SAT doesn't measure intelligence. It measures basically two things: 1) how much you know about algebra, geometry, English vocabulary, grammar, and, now, shaping an essay; and 2) how to take a timed, largely multiple-choice test.

By the way, I scored 1290 decades ago. I still don't know exactly why anyone would want to fire a rocket from the center of the earth (Physics), and even other Scrabble players don't care that I know what mellifluous means. Oh, and I'm dyslexic. So tell me, from my score, am I smart? Answer carefully.

Steve H. said...

"Bzzt", Lee? I know you're dyslexic, but that isn't a word no matter how you rearrange it. You need some vowels in there. Maybe it's Polish.

Lee, perhaps you are unaware of it, but brain-snob societies like Mensa agree with me about the SAT. They use standardized test scores to estimate IQ, and they have different standards for tests before and after the dumbing-down. A pre-dumbing-down 730 is now an 800, and any moron could get a 730 on the old test.

leeontheroad said...

ege, estev, that's cleaver fo oyu.

Bruce Hayden said...

One big problem with the direction of the SAT is that doing well depends more and more on how well trained you are, and not on how much actual aptitude you might have.

Thus, my eighth-grade daughter at an expensive private school has been writing constantly since she was in maybe 2nd grade. It shows. Technically, she is a much better writer than I am, despite my three graduate degrees.

Indeed, last year they spent a lot of time writing, then rewriting, then rewriting again. By the end, you wouldn't think that these essays were the work of 7th graders. They were much better polished than the writing of most high school seniors.

They start SAT prep in the 9th or 10th grade, but they have been taking standardized multiple-guess tests yearly since kindergarten.

The result is that these kids may not be any smarter than those in the public schools, but they are far, far better prepared for taking the SAT.

The problem here is that the SAT was originally designed to identify kids with innate aptitude, as opposed to kids who had attended top schools. The trend is in precisely the opposite direction these days: the better the school your kid attends, the better his SATs are going to be, and, in the end, the more elite the colleges he ends up attending.

Murky Thoughts said...

The SAT represented quoted at the end of the news story I read had I what thought was a perfectly credible reason for finding the MIT study fatally flawed. The MIT was a random sample only if what the SAT published was a random sample, and the SAT is saying they didn't sample randomly--they chose not to pick pithy essays as exemplars. Putting it at the end suggests to me the reporter isn't on top of the methodology and so the story's slant in favor of finding bias in grading may be wholly undeserved. Based on the details presented in the story, which are all I know, the question of bias remains totally unresolved in my mind.

Murky Thoughts said...

Let me try that again less pithily and more proof-readily:

The SAT representative who was quoted at the very end of the news story I read about the grading study offered what I considered to be perfectly credible grounds for regarding the MIT study as fatally flawed and unable to support a conclusion of a grading bias in favor of longer essays. The MIT sample of graded essays was random only if what the SAT published on its Web site was random; but according to the quoted representative, the SAT inadvertently sampled non-randomly--that is, they chose not to pick pithy essays for posting as exemplars on their Web site. Putting this rebuttal at the end suggests to me the reporter didn't recognise it as serious and so just isn't in command of the science in this story, which says to me that the story's slant in favor of a finding of bias in SAT grading is likely to be wholly undeserved. Based on the details presented, which are all I know, the question of bias remains totally unresolved in my mind.

Murky Thoughts said...

Incidentally, I find it credible that in selecting representative illustrations of "good essays" for the SAT Web site they chose long ones. I suspect the argumentative structure of longer ones tends to be more obvious. If they'd chosen exemplars for the Web site by shouting into the back room, "Hey Joe, pass me ten of the essays from the top of the '5' pile," then we might expect the examples of top scores to be random with respect to length; but if they weren't selected like that, then we have no idea what the criteria were.