Any day now, white folks will be talking about cultural bias in testing.
And they'll be right.
Making the Grade
Tuesday, August 10, 2004; Page A18
"BE A CONFORMIST." So advises one leading test prep company on how to beat the analytical writing assessment for the graduate business school entrance exam, or GMAT. Lacking originality might land any high school or college student a C-plus, but in the new world of the computer-graded essay, conformity will win the top prize. As The Post's Jay Mathews reported, the GMAT has pioneered the use of computer programs to grade essays in high-stakes standardized testing, and it is being closely watched by other makers of standardized tests. The computer program works by comparing a submitted essay to a database of other already-scored essays on the same topic. The more similar it is to a high-scored essay on the same topic, the better the score.
…But using computers to help grade the test merely underscores the idea that creativity and content are irrelevant, as shown when craftily written nonsense essays earned top marks as part of a study. Scoring high, then, becomes less about writing well and more about hewing to some statistically generated model essay whose cookie-cutter structure can be easily analyzed by a computer. And with some colleges already using the program to help make placement decisions for writing classes, and some schools using it to give students feedback on their essays, a question arises: Does an essay's value come from following a strict formula for writing it?
The computer program recommends that a conclusion contain at least three sentences. Why, if two will do?