The summer before I started teaching at SMU in 1984, the law school sent me to the workshop for new teachers that is sponsored each year by our professional association, the Association of American Law Schools (AALS). During that two-day boot camp, the principal speaker made a big pitch for the use of multiple-choice exams, which he said “can test for anything an essay exam can test for, just more reliably and consistently.” This was a pretty wild thought, considering that I had taken exactly one multiple-choice test during my law-school career (1974-77). Law-school exams, as in most of the liberal arts, were supposed to test not just what the student knew, but how well the student could apply what she knew. Essay questions seemed perfect for this purpose. On the other hand, machine-graded multiple-choice questions held out the promise of giving me back the month of December, a time when one of my former colleagues said he was tempted to call the Provost and offer to give back his salary . . . or demand that it be doubled. And, I rationalized, machine-graded exams would at least produce consistent results, thus curing a problem that is a constant concern when reading 80-100 blue-books over the course of several weeks.
It took a few years for me to take the speaker’s advice and try a multiple-choice final exam. I’ve since worked through a number of variations on the theme, such as allowing students to explain their answers (or critique the question), clueing me in to problems that may make a particular answer less creditworthy than I originally thought. The other advantage of this variation was that it gave students a chance to avoid the all-or-nothing nature of a machine-graded question. Even if the student had chosen the wrong answer (or the answer I wasn’t looking for), a cogent explanation showing an understanding of the underlying rules and a mostly-excellent analysis could earn the student partial or even full credit for her answer.
Experience with these variations has led to my more or less consistent use of a multiple-choice variant in all of my classes: the multiple-choice question with short-answer explanation. It is constructed exactly like any other multiple-choice question (factual set-up (“root”), prompt (“stem”), and 4-5 answers (“options”)) with the addition of one single word: “Explain.” My exam instructions and the constraints imposed by the clock produce explanations of 300-500 words — enough to allow me to gauge the student’s knowledge of the law and ability to apply that knowledge to a fact pattern. What, you might ask, does the multiple-choice part of the question add? It focuses the student on the issues I want to have addressed, keeps the student from wandering off into an unrelated area of doctrine, and forces the student to consider an argument, issue, or doctrine that she might not otherwise have considered.
Granted, in the old days, issue-spotting, not bringing in unrelated doctrine, and considering all of the potentially meritorious counterarguments to your position were exactly the things essay questions expected students to do. By carefully constructing the questions and options, I can do that with my exams as well, but in a more controlled environment, which I think makes for a fairer test. It also corrects for the all-or-nothing nature of Scantron forms. It does not, alas, give me back my December.