The Perils of Measuring Performance, Inside and Outside Academia
The omnipresence of smiley evaluations, rankings, scores, key performance indicators and school grades hardly surprises us anymore. We take their presence and value for granted. But these quantified measures of performance come at a cost, as I argue in a recent commentary in Business & Society.
Take student evaluations of teaching (SETs), used in some universities to measure teaching quality. In SETs, students rate various aspects of a course, often on a scale from 1 (very bad) to 5 (very good). The resulting scores are frequently used for staff evaluation or promotion purposes. And that is a problem.
Why?
Research has found that SET scores relate to professor gender, class size, and course level: aspects that are often outside a professor’s ‘circle of influence’.
Some 25 years ago, this led psychologist Ian Neath to write a rather cynical paper titled “How to improve your teaching evaluations without improving your teaching,” offering ‘tips’ such as: be male, teach small classes, and do not teach required courses.
In addition, the underlying assumption that SETs form a reliable proxy for those aspects of teaching that matter – such as student learning – simply does not hold, according to a recent meta-analysis.
Why, then, are such quantified ‘performance indicators’ still used? Likely because they simplify a complex classroom reality. Quantification reduces something as complex and multidimensional as teaching to a single, one-dimensional score. And such a score gives its possessor a sense of control and understanding. But given everything that is lost in that reduction, this sense is largely an illusion. And that illusion comes at a cost.
I distinguish personal, organizational and societal ‘costs’ of performance measurement. At the personal level, when measurement systems like SETs are used to reward, judge or evaluate people, this can lead to stress and alienation. It can also trigger ‘indicatorism’: behavior aimed at improving (or rather: manipulating) an indicator while losing sight of the original goal. For organizations this is costly, as it diminishes the informativeness of such performance measures. At the societal level, performance indicators may give the false impression that simple trade-offs can be made. When environmental ‘performance’ is quantified as kilograms of carbon emissions, for instance, it suggests that a flight can easily be offset by planting some trees. But reality is more complex than such numbers suggest.
I am not suggesting that we abandon performance measurement altogether, but we need a more thorough discussion about its limitations and potential side effects. How we use quantified performance indicators also matters.
When indicators are used to trigger dialogue about what is important, or to learn what might be improved, they can have favorable effects.
So the problem is not that performance measures simplify reality. We all use maps every day, and they are useful to us precisely because they simplify (imagine having to navigate with a detailed 1:1 scale map…). What we need to address is how we understand and use ‘performance’ measures such as SETs. We should not be naïve about their limitations, should ideally complement quantified measures with richer qualitative information, and should use them to initiate dialogues about what matters. Because not everything that matters can be measured, and not everything that can be measured matters.