SAT graders are told to read an essay just once and spend two to three minutes per essay, and Dr. Perelman is now adept at rapid-fire SAT grading. This reporter held up a sample essay far enough away so it could not be read, and he was still able to guess the correct grade by its bulk and shape. "That's a 4," he said. "It looks like a 4."
May 4, 2005
"I would advise writing as long as possible, and include lots of facts, even if they're made up."
Taking the new SAT essay test.
15 comments:
This fact, by the way, is also true for law professors who grade law school exams. It was well known that a person who could write faster and put down more supporting arguments did better on an exam than a person who wrote slower, even if the additional arguments were bogus. The more arguments the better. Perhaps some professors who took the time to grade their exams carefully wouldn't allow this, but professors are people too. Thus, the general rule was that you got graded for quantity more than quality. I suppose you can't blame the law professors too much, as they have to come up with some system to grade hundreds of exams. Giving points for each argument is one way to avoid having to thoroughly read each one.
It is also well known that one astronomical grade in the first year (say, a 90 in Property) can change your whole law school career because, as every law student knows, the first year matters most.
However, having graduated 10 years ago, I also know that a good law school GPA only helps you get your first job, not necessarily your second. Further, the only people who in the end really care about how well you did in law school are those who also did well, which tends to be a smaller percentage of people in the working world.
Sloanasaurus: I happen to have a law school exam that just started, and I take your criticism very seriously. Ideally, we should switch to computers with exam software and then impose word limits. I'd impose a space limit now, but with handwritten exams, it would only make people write tiny. Often people who write long exams really are making more points and going into more detail, so they deserve better grades. But if someone is cranking out irrelevant material and being verbose and repetitious, I certainly notice. I does not make a favorable impression! There are exams where you can go page after page without finding anything that you can give credit for.
Actually, there are Supreme Court cases I feel that way about!
It's the old scale method for grading then? Excellent.
Maybe we should go to the stair-step method, where we toss SAT essays down stairs labeled 1, 2, 3, 4, 5, and 6 and let fate decide.
Sorry for double posting and for writing "I does not make a favorable impression!" Looks like I'm speaking in some ridiculous dialect! Should be "it" not "I."
How curious. These same SAT-takers were taught in Kindergarten... all the way back then... that spelling and content don't matter. Just getting your "thoughts," whatever those are(!), on paper was more important than anything else. This new essay exam is the logical extension of that "feeling" philosophy of education.
I am not really griping about the exam problem; it is a fact of life about grading and testing. Most exams have time limits, and most law students use all of the time. If you can write 50% faster than your neighbor, you can put down 50% more than your neighbor and significantly increase your chances of getting a higher grade, especially if your professor grades your exam during a "lazy" period. Everyone has lazy periods...
Students can help themselves by practicing writing more quickly. I was a slow writer and suffered early on, so I practiced writing more quickly. It helped me increase my grades even though my knowledge of the subjects was the same. It's similar to preparing for the bar exam, where finishing on time is one of the most important factors in passing.
I fully agree that standardized tests need to be taken on the computer. I took the AP World History test just yesterday, and I'm certain I would've gotten a better score on the writing section if I had been able to do it on a computer. The reason is that they demand you write in pen, which means that if you have any corrections or additions to make after writing something, you need to START ALL OVER, so it's safer to just go on with what you have than to try to write a completely new essay in the time you have remaining.
They also told us not to double-space, for some crazy reason.
When I was a physics TA, some kid told me he got a 1530 on the SAT, and I thought, "Damn, he doesn't seem that smart." Then I found out that the SAT had been changed since I was in high school. These days, you get 1400 points just for using the right end of the pencil.
No wonder people think Bush is stupid because he didn't break 1300. Everyone breaks 1300 now.
"Bzzt", Lee? I know you're dyslexic, but that isn't a word no matter how you rearrange it. You need some vowels in there. Maybe it's Polish.
Lee, perhaps you are unaware of it, but brain-snob societies like Mensa agree with me about the SAT. They use standardized test scores to estimate IQ, and they have different standards for tests before and after the dumbing-down. A pre-dumbing-down 730 is now an 800, and any moron could get a 730 on the old test.
One big problem with the direction of the SAT is that doing well depends more and more on how well trained you are, and not on how much actual aptitude you might have.
Thus, my eighth-grade daughter at an expensive private school has been writing constantly since she was in maybe 2nd grade. It shows. Technically, she is a much better writer than I am, despite my three graduate degrees.
Indeed, last year they spent a lot of time writing, then rewriting, then rewriting again. By the end, you wouldn't have thought these were the work of 7th graders; the essays were far more polished than the writing of most high school seniors.
They start SAT prep in the 9th or 10th grade, but they have been taking standardized multiple-guess tests yearly since kindergarten.
The result is that these kids may not be any smarter than those in the public schools, but they are far, far better prepared for taking SAT tests.
The problem here is that the SAT was originally designed to identify kids with innate aptitude, as opposed to those who had merely attended top schools. The trend these days is in precisely the opposite direction: the better the school your kid attends, the better his SAT scores are going to be, and, in the end, the more elite the college he ends up attending.
The SAT represented quoted at the end of the news story I read had I what thought was a perfectly credible reason for finding the MIT study fatally flawed. The MIT was a random sample only if what the SAT published was a random sample, and the SAT is saying they didn't sample randomly--they chose not to pick pithy essays as exemplars. Putting it at the end suggests to me the reporter isn't on top of the methodology and so the story's slant in favor of finding bias in grading may be wholly undeserved. Based on the details presented in the story, which are all I know, the question of bias remains totally unresolved in my mind.
Let me try that again less pithily and more proof-readily:
The SAT representative who was quoted at the very end of the news story I read about the grading study offered what I considered to be perfectly credible grounds for regarding the MIT study as fatally flawed and unable to support a conclusion of a grading bias in favor of longer essays. The MIT sample of graded essays was random only if what the SAT published on its Web site was random; but according to the quoted representative, the SAT inadvertently sampled non-randomly--that is, they chose not to pick pithy essays for posting as exemplars on their Web site. Putting this rebuttal at the end suggests to me the reporter didn't recognise it as serious and so just isn't in command of the science in this story, which says to me that the story's slant in favor of a finding of bias in SAT grading is likely to be wholly undeserved. Based on the details presented, which are all I know, the question of bias remains totally unresolved in my mind.
Incidentally, I find it credible that in selecting representative illustrations of "good essays" for the SAT Web site they chose long ones. I suspect the argumentative structure of longer ones tends to be more obvious. If they'd chosen exemplars for the Web site by shouting into the back room, "Hey Joe, pass me ten of the essays from the top of the '5' pile," then we might expect the examples of top scores to be random with respect to length; but if they weren't selected like that, then we have no idea what the criteria were.
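To make the selection effect concrete, here is a toy simulation. All the numbers are made up, and nothing here comes from the MIT study or the College Board; it simply assumes graders have no length preference at all, and that exemplars are posted so that their length "looks typical" of each score band (long essays to illustrate 6s, short ones to illustrate 1s). Under those assumptions, the posted sample shows a strong length-score correlation even though the full pool of graded essays shows none.

```python
# Toy demonstration of selection bias in posted exemplars.
# Assumption (not from the article): scores are assigned with NO regard to
# length, but the essays chosen for posting are picked so their length looks
# "typical" of their score band.
import random
import statistics

random.seed(0)

# Full pool of graded essays: length (in words) and score are independent
# by construction, i.e., the graders show no length bias at all.
pool = [(max(50, random.gauss(300, 80)), random.randint(1, 6))
        for _ in range(10_000)]

def corr(pairs):
    """Pearson correlation between the first and second element of each pair."""
    xs, ys = zip(*pairs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Non-random posting rule: for each score band, post the 10 essays whose
# length is closest to a target that grows with the score
# (150 words for a 1, up to 400 words for a 6).
posted = []
for score in range(1, 7):
    target = 100 + 50 * score
    band = [essay for essay in pool if essay[1] == score]
    band.sort(key=lambda essay: abs(essay[0] - target))
    posted.extend(band[:10])

print(f"correlation in full graded pool: {corr(pool):+.2f}")   # roughly 0.00
print(f"correlation in posted exemplars: {corr(posted):+.2f}") # strongly positive
```

A study that can only see the posted exemplars would find exactly the length-score pattern the article describes, even though, in this toy setup, the graders never rewarded length at all. That is why whether the Web-site essays were a random sample matters so much.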