Research Assessment, Scientometrics, and Qualitative v. Quantitative Measures
The movement towards more responsible forms of research assessment has consistently called out the unhealthy overreliance on narrowly focused publication metrics as a proxy for research and researcher quality. The negative impacts of the misuse of journal impact factors, h-indices, and other citation metrics are well documented: on equity, diversity, and inclusion; on the uptake of open research practices; on mental health and burnout; on research integrity; and on the scholarly record, including issues such as paper mills, questionable research practices, and monolingualism.
The Coalition for Advancing Research Assessment (CoARA) is the latest evolution of initiatives seeking to broaden out our definition of research ‘excellence’ and to assess it appropriately. To this end, its second core commitment is to “Base research assessment primarily on qualitative evaluation for which peer review is central, supported by responsible use of quantitative indicators.”
Concerns about Commitment 2
Unfortunately, this commitment seems to have become a sticking point for parts of the scientometric community. In 2023, some scholars from the University of Granada in Spain accused both the Declaration on Research Assessment (DORA) and CoARA of “bibliometric denialism.” This was challenged robustly by both CWTS Leiden scholar Alex Rushforth and by DORA and CoARA themselves. However, earlier this year, the President of the International Society for Scientometrics and Informetrics (ISSI) shared similar concerns, which he felt ultimately rendered CoARA “unsound.”
The concerns seem to be founded on fear and misunderstanding: a fear that the role of scientometrics and scientometricians is under threat; and a misunderstanding of CoARA’s position on the value of quantitative indicators.
The purpose of this piece is to clarify CoARA’s position on the use of quantitative indicators and, in so doing, to reassure the scientometric community.
The role of quantitative indicators in research assessment
As previously stated, there is no shortage of evidence on the damage done to the scholarly community by an over-reliance on evaluative bibliometrics. When interdisciplinary evaluation challenges and the publish-or-perish-fueled shortage of evaluative labor are addressed only with numerical indicators, complex research activities, agendas, and achievements are reduced to numbers rather than engaged with directly. This does not serve research well and does not ensure wise investment of R&D resources.
In addition, such evaluative bibliometrics are largely generated using proprietary databases such as Scopus and Web of Science: databases and systems that are both opaque and out of the hands of research communities. Only peer evaluation is fully community-controlled and independent in this respect.
However, whilst peer review is recognized as the gold standard for many forms of research assessment, that is not to say there are no problems with it. Indeed, the evidence base for challenges with peer review – its quality, accuracy, replicability, efficiency, equity, transparency, inclusion, and participation rates – is building steadily, and this is acknowledged in the CoARA Agreement, which references the need to “address the biases and imperfections to which any method is prone.”
The problem is that many read the first clause of Commitment 2, which states that peer review is central, and fail to read the second, which says that peer review should be supported by the responsible use of indicators. Indeed, balancing the roles of the quantitative and the qualitative in our assessments is critical to the future of responsible research assessment. This is a key focus of the CoARA Metrics Working Group and of the Academic Career Assessment Working Group, which has already identified that 70 percent of universities are looking to rely on a balanced use of qualitative assessment and metrics. It is also one of the core considerations of the SCOPE framework for Responsible Research Assessment that is part of the CoARA Toolbox.
A helpful visualization of the appropriate place for quantitative indicators across various levels of assessment can be found in the Norwegian Career Assessment Matrix.
Using scientometrics alone for assessments at lower levels of aggregation, i.e., for the assessment of individuals, including for consequential purposes such as allocating rewards (funding, jobs), is highly problematic; in such cases, peer review should be preferred. (The other key consideration here is discipline: the effects of different disciplinary publication practices on evaluation will be well known to scientometricians.) However, the use of scientometrics at higher levels of aggregation, such as country or university level, and for less consequential forms of assessment, such as scholarly understanding, is far less problematic (if still imperfect). Clearly, using peer review alone for these forms of assessment would not be successful, and it is not something CoARA advocates.
However, whatever research assessment methods are used, whether they involve scientometrics or not, CoARA Commitment 2 identifies a role for qualitative, expert assessment. Most scientometricians would wholeheartedly agree with this: they themselves provide quantitative assessments alongside the qualitative expertise needed to interpret them. It is very rare for a quantitative assessment to stand alone.
Even so, the fact remains that an over-reliance on even responsible scientometrics can have a negative impact on the research evaluation ecosystem through trickle-down effects. The legitimate use of bibliometrics to understand country-level activity can soon end up illegitimately in promotion criteria if too much reward is attached to bibliometric assessments at higher levels of aggregation (for example, global university rankings). This was recognized in principle 9 of the Leiden Manifesto for the responsible use of bibliometrics, written by scientometricians, which called on evaluators to “Recognize the systemic effects of assessment and indicators [because] indicators change the system through the incentives they establish”. Ultimately, the CoARA Commitments are so strongly worded on the centrality of peer review in response to guidance originally provided by the scientometric community itself.
Breaking the impasse
It’s important that as we seek to move towards research assessment reform we do so together, not allowing minor points of difference to become large bones of contention. An important facet of CoARA’s implementation is the facilitation of mutual learning and exchange on evaluation practices where qualitative and quantitative approaches are meaningfully combined. In this context, we will gladly discuss any scientometrician’s concern with Commitment 2, but it is important to remember that this is one of ten commitments. All are important and designed to be taken together. As no issues seem to have been raised with the other nine commitments, we can hopefully assume that these are accepted by the scientometric community as a valid way forward.
The truth is that the research assessment reform movement needs scientometricians and scientometricians need research assessment reforms. Such reforms can benefit from the expertise of scientometricians as we seek to identify the rightful role of metrics in reformed assessments. Indeed, we are already starting to see something of a shift in this direction from those scientometric scholars previously accusing CoARA of bibliometric denialism who are now turning their attention to developing approaches for ‘narrative bibliometrics.’
Equally, scientometric scholars, like many others, will ultimately benefit from assessment reforms as all their contributions are brought within the purview of recognition and reward regimes, and more fairly, equitably and robustly assessed.
Our best chance of success is to pull together and not to pull in different directions. We hope what we have laid out here provides some clarifications and reassurance to the scientometric community and is just one of many conversations going forward as we pool our expertise and reform research assessment together.