Any day now white folks will be talking about cultural bias in testing

And they'll be right.



Making the Grade
Tuesday, August 10, 2004; Page A18

"BE A CONFORMIST." So advises one leading test prep company on how to beat the analytical writing assessment for the graduate business school entrance exam, or GMAT. Lacking originality might land any high school or college student a C-plus, but in the new world of the computer-graded essay, conformity will win the top prize. As The Post's Jay Mathews reported, the GMAT has pioneered the use of computer programs to grade essays in high-stakes standardized testing, and it is being closely watched by other makers of standardized tests. The computer program works by comparing a submitted essay to a database of other already-scored essays on the same topic. The more similar it is to a high-scored essay on the same topic, the better the score.
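To make the editorial's description concrete, here is a toy sketch of that kind of similarity-based scoring — not the actual GMAT grading program, whose internals aren't described here, just an illustration of the general idea: represent each essay as a bag of words and assign it the score of the most similar essay in a database of human-scored ones.

```python
# Toy illustration of similarity-based essay scoring (NOT the real
# GMAT grader): an essay inherits the score of the most similar
# already-scored essay, measured by cosine similarity of word counts.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts: lowercased, punctuation stripped."""
    words = [w.strip(".,;:!?\"'") for w in text.lower().split()]
    return Counter(w for w in words if w)

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(essay, scored_essays):
    """Return the score of the most similar human-scored essay."""
    vec = vectorize(essay)
    best_score, best_sim = None, -1.0
    for text, score in scored_essays:
        sim = cosine(vec, vectorize(text))
        if sim > best_sim:
            best_sim, best_score = sim, score
    return best_score

# Hypothetical database of essays already graded by humans.
corpus = [
    ("Free trade raises aggregate welfare by exploiting comparative advantage.", 6),
    ("Trade good because countries trade things and that is trade.", 2),
]
print(grade("Comparative advantage means free trade improves aggregate welfare.", corpus))
```

Note what this toy model rewards: vocabulary overlap with a high-scoring essay, not originality or correctness — exactly the "be a conformist" incentive the test prep company describes.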

…But using computers to help grade the test merely underscores the idea that creativity and content are irrelevant, as shown when craftily written nonsense essays earned top marks as part of a study. Scoring high, then, becomes more about hewing to some statistically generated model essay whose cookie-cutter structure can be easily analyzed by a computer. [P6: emphasis added] And with some colleges already using the program to help make placement decisions for writing classes, and some schools using it to give students feedback on their essays, a question arises: Does an essay's value come from hewing strictly to a formula for writing it?

The computer program recommends that a conclusion contain at least three sentences. Why, if two will do?

Posted by Prometheus 6 on August 10, 2004 - 10:29am :: News

From a psychometric standpoint, the more you refine a test to remove "cultural bias," the more the test becomes a measurement of g - the general factor commonly regarded as I.Q. The test tilts toward *pattern recognition* (craftily written nonsense in the correct pattern, for example) and problem solving instead of knowledge - which is fine, unless you need to test for knowledge competency and learned skill sets.

G-loaded testing favors the bright person who, for whatever reason, missed out on standard educational opportunities. Knowledge-based testing favors those who, despite being less bright than, say, someone whose IQ is 130+, were engaged in their education and are plugged into concepts the mainstream deems relevant. They will actually do *better* on culturally biased tests, even if they are not from the mainstream culture. Again, fine - unless what you are testing for is the ability to handle problem solving and analysis of new data.

There is no one-size-fits-all test, which is something the public and politicians - and unfortunately many educators - seem incapable of grasping. In fact, the only intellectually defensible measurement of a person's ability is a comprehensive battery of different tests, properly administered - usually at a 1:1 proctor-student ratio.

And even this, in my opinion, is only a rough snapshot, reliable as a guide at best. As we come to understand the implications of brain neuroplasticity in the next 10-15 years we should be radically changing our assumptions about teaching and learning to fit the research.

Posted by  mark safranski (not verified) on August 10, 2004 - 11:08am.

As we come to understand the implications of brain neuroplasticity in the next 10-15 years we should be radically changing our assumptions about teaching and learning to fit the research.

If that's our goal. I suspect the goal above is cost containment.

Posted by  P6 (not verified) on August 10, 2004 - 12:51pm.