
How Research Credibility Suffers in a Quantified Society

January 8, 2025

To address research credibility issues, we must reform the role of metrics, rankings, and incentives in universities.

Academia is in a credibility crisis. A record-breaking 10,000 scientific papers were retracted in 2023 because of research misconduct, and academic journals are being overwhelmed by AI-generated images, data, and text. To understand the roots of this problem, we must look at the role of metrics in evaluating the academic performance of individuals and institutions.

To gauge research quality, we count papers and citations and calculate impact factors. The higher the scores, the better. Academic performance is often expressed in numbers. Why? Quantification reduces complexity, makes academia manageable, allows easy comparison among scholars and institutions, and gives administrators a sense of grip on reality. Besides, numbers seem objective and fair, which is why we use them to allocate status, tenure, attention, and funding to those who score well on these indicators.

The Quantified Society: How Our Obsession with Performance Measurement Shapes the World We Live In, by Berend van der Kolk (2024). Business Contact, Amsterdam. Link: www.quantifiedsociety.com

The result of this? Quantity is often valued over quality. In The Quantified Society I coin the term “indicatorism”: a blind focus on enhancing indicators in spreadsheets, while losing sight of what really matters. It seems we’re sometimes busier with “scoring” and “producing” than with “understanding”.

Indicatorism

As a result, some started gaming the system. The rector of one of the world’s oldest universities, for one, set up citation cartels to boost his citation scores, while others reportedly buy(!) bogus citations. Even top-ranked institutions seem to play the indicator game by submitting false data to improve their position on university rankings!

While abandoning metrics and rankings in academia altogether would be too drastic, we must critically rethink their current hegemony. As a researcher of metrics, I acknowledge that metrics can be used for good, e.g., to facilitate accountability, to motivate, or to obtain feedback and improve. Yet when metrics are not used to obtain feedback but instead become targets, they cease to be good measures of performance, as Goodhart’s law dictates. The costs of using metrics this way probably outweigh the benefits.

Rather than using metrics as the sole truth when it comes to assessing academic performance, we should put them in perspective. We could do this by complementing quantitative metrics with qualitative information. Narratives, discussions of assumptions, and explanations can give back much-needed context to interpret metrics. Read a job candidate’s working paper instead of counting her publications in journals. Metrics can be great conversation starters, but should not replace our understanding of what (a) good research(er) is.

Nobel laureates

If we don’t change our use of metrics, research quality itself may suffer. Peter Higgs, the Nobel laureate who passed away last year, warned in an interview: “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” The pressure to produce and perform in the short term can come at the expense of scientific progress in the long term. A more critical stance towards metrics and rankings is essential if we want to enhance the quality and credibility of research.

Dr. Berend van der Kolk is an associate professor at the Vrije Universiteit Amsterdam and author of the book The Quantified Society. He has taught courses on performance measurement at universities in the UK, Spain, and the Netherlands.



Comments
Jan Bouwens

We measure firm profit to get an estimate of the value the firm created. Subpar performance leads some managers to manipulate the profit metric. However, that fact does not disqualify profit as a measure of value creation. The same is true for counting publications in journals led by excellent researchers. That said, we do need to take measures that reduce the incidence of people manipulating reported publication numbers. That is, universities and the academic community should be much more critical of their “superstars.” Let me take the Dutch prof Stapel, who fell off his pedestal in 2011. He published in top… Read more »

Wladimir Jimenez Alonso

The main issue facing the current university system is that the real “customers” of universities are not the students but the bureaucrats who control the flow of public funding. As the saying goes, “he who pays, calls the shots.” In this system, incentives are geared toward meeting the demands of indicators and metrics imposed by government administrations and funding agencies, rather than focusing on the actual needs of students or society. In some cases—common in Southern European and Latin American countries—not even government bureaucrats are the true customers, as tenured positions are often granted immediately, removing accountability. This has led… Read more »