Each semester I both dread and welcome the day I receive the email with the subject line "Teaching Evaluations." In the email, I get access to two things: my quantitative scores and my open-ended student comments.
Like many colleagues, I love the comments – the good and the bad. I welcome the sarcastic remarks about why the assignments have to be so complicated, the compliments regarding my choice of readings, and even the somewhat odd personal observations such as, "Do you know you wear black pants every Thursday?" I learn so much about what students believe to be fair and good about my course. I take most of these comments to heart, even the odd observations. In general, I feel nothing but love from the students when I read these comments, even the critical ones.
On the other hand, I dread opening the quantitative scores that always accompany the comments. As a quantitative researcher, I know how to make sense of the distributions and the possible random errors. As a heavy user of survey research, I place a lot of value on scaled responses. As a member of a large university faculty, I know that my relative overall ranking from these scores matters and that professional careers can hinge on good and bad teaching evaluations. With all the possible limitations and consequences in mind, I also take these averages and individual scores selected by my students to heart.
To be honest, my quantitative scores are rarely that low, but they are far from excellent, which is what I aspire to. What I have come to accept is that only a small proportion of students will choose the highest rank on all questions asked. For example, some might believe I manage class time well, while others disagree. When colleagues question the utility of standardized teaching evaluations, I often disagree. In my experience, my students are as intentional in their response choices on these measures as they are in writing their comments. If I am honest with myself, the majority of students are often right with respect to where they find me lacking and where they see my strengths. So I now use both my comments and my standardized scores to improve my courses.
I know many of us are concerned with the heavy reliance on standardized evaluations and the lack of attention paid to the comments portion of teaching evaluations, to peer observations, and to the critical assessment of syllabi. I completely agree. I know that we worry about students using these evaluations as a tool to exact revenge for their poor performance. However, as standardized teaching evaluation questions have improved, I increasingly see value in the responses they generate. Universities are paying more attention to the questions asked and to the data received. Over the last two years, Dedman College examined the content of questions, revised and added new ones, and finally found a way to link class characteristics to the data. As a result, the standardized scores from my fall evaluations were some of the most helpful I have received while at SMU.
While changing standardized evaluations is certainly a step in the right direction, I would suggest that universities find a way to use the open-ended comments as well. All they need to accomplish this is software that analyzes qualitative data to identify patterns. Using these software packages, we can quickly and reliably distinguish positive from negative adjectives, identify consistent problems, and generate super cool WORD CLOUDS to see what words dominate student comments. In other words, there are ways to not rely so heavily on a single number or a rank when determining how good a teacher someone is.
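To give a flavor of how simple this kind of analysis can be, here is a minimal sketch in Python using only the standard library. The sample comments and the tiny positive/negative word lists are hypothetical placeholders; a real analysis would use a fuller sentiment lexicon or an off-the-shelf text-analysis package, but the idea is the same: count words, tally sentiment, and surface recurring themes (a word cloud is just a visualization of these counts).

```python
from collections import Counter
import re

# Hypothetical sample comments standing in for real evaluation data.
comments = [
    "The readings were excellent and the discussions were great.",
    "Assignments were confusing and the grading felt unfair.",
    "Great readings, but the assignments were too complicated.",
]

# Toy sentiment lexicons for illustration only.
POSITIVE = {"excellent", "great", "clear", "fair"}
NEGATIVE = {"confusing", "unfair", "complicated", "boring"}

def tokenize(text):
    """Lowercase a comment and split it into words."""
    return re.findall(r"[a-z']+", text.lower())

word_counts = Counter()
pos, neg = 0, 0
for comment in comments:
    for word in tokenize(comment):
        word_counts[word] += 1
        if word in POSITIVE:
            pos += 1
        elif word in NEGATIVE:
            neg += 1

# The most frequent words hint at recurring themes across comments.
print(word_counts.most_common(5))
print(f"positive terms: {pos}, negative terms: {neg}")
```

Even this toy version flags "readings" and "assignments" as dominant themes and shows the balance of positive and negative language, which is exactly the kind of summary an institution could produce at scale.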
I would love to hear other ideas for how we might continue to improve how we personally use our teaching evaluations and how our institutions use these evaluations (both quantitative and qualitative) to evaluate our performance.