How Research Credibility Suffers in a Quantified Society
To restore the credibility of research, we must rethink the role of metrics, rankings, and incentives in universities.
Academia is in a credibility crisis. A record-breaking 10,000-plus scientific papers were retracted in 2023, largely because of scientific misconduct, and academic journals are being flooded with AI-generated images, data, and text. To understand the roots of this problem, we must look at the role metrics play in evaluating the academic performance of individuals and institutions.
Academic performance is often expressed in numbers. To gauge research quality, we count papers and citations and calculate impact factors; the higher the scores, the better. Why? Quantification reduces complexity, makes academia manageable, allows easy comparison among scholars and institutions, and gives administrators a sense of control. Besides, numbers seem objective and fair, which is why we use them to allocate status, tenure, attention, and funding to those who score well on these indicators.
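To make concrete what one such indicator measures: the widely used two-year journal impact factor is, in essence, a ratio of recent citations to recent publications. In simplified form (a sketch of the standard definition, leaving out the detailed counting rules for what counts as a "citable item"):

$$
\text{JIF}_Y \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y\!-\!1 \text{ and } Y\!-\!2}{\text{number of citable items published in years } Y\!-\!1 \text{ and } Y\!-\!2}
$$

A single ratio like this compresses a journal's entire output into one number, which is precisely what makes it both convenient and easy to over-interpret.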
The result? Quantity is often valued over quality. In The Quantified Society, I coin the term “indicatorism”: a blind focus on improving the indicators in spreadsheets while losing sight of what really matters. We sometimes seem busier with “scoring” and “producing” than with “understanding”.
Indicatorism
As a result, some have started gaming the system. The rector of one of the world’s oldest universities, for instance, set up citation cartels to boost his citation scores, while others reportedly even buy bogus citations. Even top-ranked institutions appear to play the indicator game, submitting false data to improve their position in university rankings.
While abandoning metrics and rankings in academia altogether would be too drastic, we must critically rethink their current hegemony. As a researcher of metrics, I acknowledge that metrics can be used for good, e.g., to facilitate accountability, to motivate, or to obtain feedback and improve. Yet when metrics are no longer used to obtain feedback but instead become targets, they cease to be good measures of performance, as Goodhart’s law dictates. The costs of using metrics this way probably outweigh the benefits.
Rather than treating metrics as the sole truth about academic performance, we should put them in perspective. We can do this by complementing quantitative metrics with qualitative information: narratives, discussions of assumptions, and explanations can restore the context needed to interpret the numbers. Read a job candidate’s working paper instead of counting her journal publications. Metrics can be great conversation starters, but they should not replace our understanding of what (a) good research(er) is.
Nobel laureates
If we don’t change our use of metrics, research quality itself may suffer. Peter Higgs, the Nobel laureate who passed away last year, warned in an interview: “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” The pressure to produce and perform in the short term can come at the expense of scientific progress in the long term. A more critical stance towards metrics and rankings is essential if we want to enhance the quality and credibility of research.