On Measuring What We Value
In their response to Ziyad Marar’s thought piece “On Measuring Social Science Impact” in Organization Studies, the HuMetricsHSS team (see members below) argue that we need to step back to determine what we are measuring – and why.
We are living through an intense period of change in higher education as the nature, scope, diversity, and breadth of scholarly communication and practice expand beyond the traditional boundaries of “what counts as scholarship.” New technologies and a deeper understanding of the meaningful connections that shape academic work present academia with unprecedented opportunities to reconsider our efforts to recognize and celebrate academic excellence. We on the HuMetricsHSS team – a group engaged in rethinking institutional practices of scholarly assessment – believe excellence must ultimately be rooted in a sophisticated understanding of the values that animate our academic lives. Our ability in the academy to come to a shared understanding of these values depends on intentional efforts to align our values with our scholarly practices in ways that create more just and meaningful relationships.
When we refer to quality, impact, excellence, and relevance as important measures of scholarship, we fail to adequately recognize that these values are themselves shaped and determined by the degree to which they are put into practice through the scholarship we undertake. More often than not, our efforts to measure these values rely on proxy indicators that, while they may serve as convenient filters, at best distort the effects we seek to recognize, and at worst conceal those aspects of scholarship colleges and universities say they most want to cultivate.
In his article, Marar discusses the need to filter knowledge claims to identify relevance and excellence. Given the slippery status of truth in our current political climate, this need, we’d argue, extends beyond academe. Such filtering requires a hefty dose of intentionality, one that should be rooted in our institutional and professional values. What do we actually mean when we say that something is “relevant” or “excellent”? Relevant to whom? Excellent according to whose definition and standards? If an institution prides itself on the public impact of its research, is a longitudinal study of healthcare disparities more excellent when it is published in a journal with a high impact factor, or when it receives many citations, or when it is conducted with attention to the privacy, interests, and centrality of the people whose health it discusses?
The currency of academic filtration at present (peer review, tenure requirements, hiring committees) tends to favor unquestioned – and often undefined – concepts of relevance and excellence that are inward-facing and self-replicating rather than expansive, inclusive, and epistemologically diverse. This currency also pays out for an incredibly narrow scope of work, one that comprises only a fraction of the whole of academic scholarly labor: an austerity that rewards writing but not reviewing, research but not program leadership, and that pays lip service to diversity while undermining mentorship, collaboration, and community engagement.
The coupled challenges of valuing too much what we measure and failing to measure what we truly value are captured in Marar’s references to both Goodhart’s law (“when a measure becomes a target, it ceases to be a good measure”) and Cameron’s observation (“not everything that can be counted counts, and not everything that counts can be counted”). In a study we recently published that draws on over 120 interviews with faculty, administrators, staff, and librarians across the Big Ten Academic Alliance, a consortium of research universities in the United States, the interviewees identified both of these problems: metrics of various sorts overwhelmingly incentivize only certain activities across the university, while the work that institutions claim to value – work engaged with the public, work that centers diversity, equity, and inclusion, and so on – and the work that scholars themselves value rarely counts.
If traditional filters of prestige are themselves steeped in a set of tacit values that may no longer adequately respect the diverse modes of scholarly labor (or the laborers themselves), then when better than now to step back and ask what we are counting – and why?
*The HuMetricsHSS team employs a collective authorship model. To credit all authors contributing under this model, we aim to list co-authorship in a different way in each piece we write. For this piece, the order of authorship was randomly assigned by drawing names:
Christopher P. Long [0000-0001-9932-5689]
Nicky Agate [0000-0001-7624-3779]
Bonnie Russell [0000-0002-0374-0384]
Jason Rhody [0000-0002-7096-1881]
Penelope Weber [0000-0002-4542-8989]
Bonnie Thornton Dill [0000-0002-7450-2412]
Rebecca Kennison [0000-0002-1401-9808]
Simone Sacchi [0000-0002-6635-7059]