On Measuring What We Value 

July 7, 2022

In their response to Ziyad Marar’s thought piece “On Measuring Social Science Impact” from Organization Studies, the HuMetricsHSS team (see members below) argue that we need to step back to determine what we are measuring – and why.

We are living through an intense period of change in higher education as the nature, scope, diversity, and breadth of scholarly communication and practice expand beyond the traditional boundaries of “what counts as scholarship.” New technologies and a deeper understanding of the meaningful connections that shape academic work present academia with unprecedented opportunities to reconsider our efforts to recognize and celebrate academic excellence. We on the HuMetricsHSS team – a group engaged in rethinking institutional practices of scholarly assessment – believe excellence must ultimately be rooted in a sophisticated understanding of the values that animate our academic lives. Our ability in the academy to come to some shared understanding of these values depends on intentional efforts to align our values with our scholarly practices to create more just and meaningful relationships. 


When we refer to quality, impact, excellence, and relevance as important measures of scholarship, we fail to adequately recognize that these values are themselves shaped and determined by the degree to which they are put into practice through the scholarship we undertake. More often than not, our efforts to measure these values rely on proxy indicators that, while they may serve as convenient filters, at best distort the effects we seek to recognize, and at worst conceal those aspects of scholarship colleges and universities say they most want to cultivate.

In his article, Marar discusses the need to filter knowledge claims to identify relevance and excellence. Given the slippery status of truth in our current political climate, this need, we’d argue, extends beyond academe. Such filtering requires a hefty dose of intentionality, one that should be rooted in our institutional and professional values. What do we actually mean when we say that something is “relevant” or “excellent”? Relevant to whom? Excellent according to whose definition and standards? If an institution prides itself on the public impact of its research, is a longitudinal study of healthcare disparities more excellent when it is published in a journal with a high impact factor, or when it receives many citations, or when it was conducted with attention to the privacy, interests, and centrality of the people whose health it discusses?  

The currency of academic filtration at present (peer review, tenure requirements, hiring committees) tends to favor unquestioned — and often undefined — concepts of relevance and excellence that are inward-facing and self-replicating, rather than expansive, inclusive, and epistemologically diverse. This currency also pays out for an incredibly narrow scope of work that comprises only a fraction of the whole of academic scholarly labor, an austerity that rewards writing but not reviewing and research but not program leadership, that gives lip service to diversity but undermines mentorship, collaboration, and community engagement. 

The coupled challenge of valuing too much what we measure and not measuring what we truly value is outlined in Marar’s references to both Goodhart’s law (“when a measure becomes a target, it ceases to be a good measure”) and Cameron’s statement (“not everything that can be counted counts, and not everything that counts can be counted”). In a study we recently published that includes over 120 interviews with faculty, administrators, staff, and librarians across the Big Ten Academic Alliance, a consortium of research universities in the United States, the interviewees identified both of these problems: that metrics of various sorts overwhelmingly incentivize only certain activities across the university, while the work that the institutions claim to value — work engaged with the public, work that centers diversity, equity, and inclusion, and so on — and the work that the scholars themselves value rarely counts.

If traditional filters of prestige are themselves steeped in a set of tacit values that may no longer adequately respect the modes of scholarly labor (or the laborers themselves), then when better to step back for a moment to ask what we are counting — and why?


*The HuMetricsHSS team employs a collective authorship model. To credit all authors contributing under this model, we aim to list co-authorship differently in each piece we write. This time the order of authorship has been randomly assigned by drawing names:

Christopher P. Long [0000-0001-9932-5689] 

Nicky Agate [0000-0001-7624-3779] 

Bonnie Russell [0000-0002-0374-0384] 

Jason Rhody [0000-0002-7096-1881] 

Penelope Weber [0000-0002-4542-8989] 

Bonnie Thornton Dill [0000-0002-7450-2412] 

Rebecca Kennison [0000-0002-1401-9808] 

Simone Sacchi [0000-0002-6635-7059] 

HuMetricsHSS is an initiative that creates and supports values-enacted frameworks for understanding and evaluating all aspects of the scholarly life well-lived and for promoting the nurturing of these values in scholarly practice. Composed of individuals working in academic and nonprofit academic-adjacent sectors, the HuMetricsHSS team is committed to establishing humane indicators of excellence in academia, focused particularly on the humanities and social sciences.
