First: Scantron. Now: Automated essay graders. What’s next?

From the April 4 New York Times:

Imagine taking a college exam, and, instead of handing in a blue book and getting a grade from a professor a few weeks later, clicking the “send” button when you are done and receiving a grade back instantly, your essay scored by a software program.

And then, instead of being done with that exam, imagine that the system would immediately let you rewrite the test to try to improve your grade.

EdX, the nonprofit enterprise founded by Harvard and the Massachusetts Institute of Technology to offer courses on the Internet, has just introduced such a system and will make its automated software available free on the Web to any institution that wants to use it. The software uses artificial intelligence to grade student essays and short written answers, freeing professors for other tasks.

There is, of course, a backlash:

a group of educators who last month began circulating a petition opposing automated assessment software. The group, which calls itself Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment [actually, it appears they call themselves Human Readers], has collected nearly 2,000 signatures, including some from luminaries like Noam Chomsky.

“Let’s face the realities of automatic essay scoring,” the group’s statement reads in part. “Computers cannot ‘read.’ They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.”

And, lest you think the opposition group consists of a bunch of cranky old Luddites, it includes — in addition to Prof. Chomsky — Les Perelman, “a retired director of writing and a current researcher at M.I.T.”

As much as I would like to think there is a software-based alternative to slogging through hundreds of essay answers at the end of each semester, doesn’t it seem just a little incredible?

One of PAMSSESHA’s objections is the lack of statistical evidence comparing computer grading with grading by humans. Until there are studies that validate this technology (preferably from independent third parties), this strikes me as a leap too far. Indeed, although Wikipedia (I know, I know) states there are plenty of studies showing that automated grading programs outperform human graders, there’s an impressive pile of research attached to the PAMSSESHA petition that contradicts that claim.

More, I am sure, later . . . .

About Thomas Mayo

This entry was posted in Assessment, Critical Thinking, Technology, Writing.