
Research Assessment, Scientometrics, and Qualitative v. Quantitative Measures

September 23, 2024

The movement towards more responsible forms of research assessment has been consistent in calling out the unreliable and unhealthy overreliance on narrowly focused publication metrics as a proxy for research and researcher quality. The negative impacts of the misuse of journal impact factors, h-indices, and other citation metrics on equity, diversity and inclusion; on the take-up of open research practices; on mental health and burnout; on research integrity; and on the scholarly record (including issues such as paper mills, questionable research practices, and monolingualism) have all been well documented.

The Coalition for Advancing Research Assessment (CoARA) is the latest evolution of initiatives seeking to broaden out our definition of research ‘excellence’ and to assess it appropriately. To this end, its second core commitment is to “Base research assessment primarily on qualitative evaluation for which peer review is central, supported by responsible use of quantitative indicators.”

This article by Luciana Balboa, Elizabeth Gadd, Eva Méndez, Janne Pölönen, Karen Stroobants, Erzsébet Tóth-Czifra and the CoARA Steering Board originally appeared on the LSE Impact of Social Sciences blog as “The role of scientometrics in the pursuit of responsible research assessment.”

Concerns about Commitment 2

Unfortunately, this commitment seems to have become a sticking point for parts of the scientometric community. In 2023, some scholars from the University of Granada in Spain accused both the Declaration on Research Assessment (DORA) and CoARA of “bibliometric denialism.” This was robustly challenged both by CWTS Leiden scholar Alex Rushforth and by DORA and CoARA themselves. However, earlier this year, the President of the International Society for Scientometrics and Informetrics (ISSI) shared similar concerns, which he felt ultimately rendered CoARA “unsound.”

The concerns seem to be founded on fear and misunderstanding: a fear that the role of scientometrics and scientometricians is under threat; and a misunderstanding of CoARA’s position on the value of quantitative indicators.

The purpose of this piece is to clarify CoARA’s position on the use of quantitative indicators and, in so doing, we hope, to reassure the scientometric community.

The role of quantitative indicators in research assessment

As previously stated, there is no shortage of evidence of the damage done to the scholarly community by an over-reliance on evaluative bibliometrics. If interdisciplinary evaluation challenges and the publish-or-perish-fueled shortage of evaluative labor are addressed only with numerical indicators, complex research activities, agendas, and achievements are reduced to numbers, substituting for actual engagement with research outputs. This does not serve research well and does not ensure wise investment of R&D resources.

In addition, such evaluative bibliometrics are largely generated using proprietary databases such as Scopus and Web of Science: databases and systems that are both opaque and out of the hands of research communities. Only peer evaluation is fully community-controlled and independent in this respect.

However, whilst peer review is recognized as the gold standard for many forms of research assessment, that is not to say there are no problems with it. Indeed, the evidence base for challenges with peer review – its quality, accuracy, replicability, efficiency, equity, transparency, inclusion, and participation rates – is building steadily, and this is acknowledged in the CoARA Agreement, which references the need to “address the biases and imperfections to which any method is prone.”

The problem is that many read the first clause of Commitment 2, which states that peer review is central, and fail to read the second clause, which says that peer review should be supported by the responsible use of indicators. Indeed, the need to balance the roles of the quantitative and the qualitative in our assessments is critical to the future of responsible research assessment. This is a key focus of the CoARA Metrics Working Group and the Academic Career Assessment Working Group, which has already identified that 70 percent of universities are looking to rely on a balanced use of qualitative assessment and metrics. It is also one of the core considerations of the SCOPE framework for Responsible Research Assessment, which is part of the CoARA Toolbox.

A nice visualization of the appropriate place for quantitative indicators across various levels of assessment can be found in the Norwegian Career Assessment Matrix:

[Figure: a micro-macro split in where bibliometrics and peer review should be used, with bibliometrics more suited to the macro level and peer review to the micro level.]
Source: NOR-CAM – A toolbox for recognition and rewards in academic careers, adapted from a figure originally created by Glänzel.

Using scientometrics alone for assessment at lower levels of aggregation, i.e., for the assessment of individuals, and for consequential purposes such as allocating rewards (funding, jobs), is highly problematic. In such cases, peer review should be preferred. (Of course, the other key consideration here is discipline; the effect of different disciplinary publication practices, and therefore of evaluation impacts, will be well known to scientometricians.) However, the use of scientometrics at higher levels of aggregation, such as country or university level, and for less consequential forms of assessment, such as for scholarly understanding, is far less problematic (if still imperfect). Clearly, using peer review alone for these forms of assessment would not be successful – and is not something CoARA advocates.

However, whatever research assessment methods are used, whether they involve scientometrics or not, CoARA Commitment 2 identifies a role for qualitative, expert assessment. Most scientometricians would wholeheartedly agree with this: they themselves provide quantitative assessments alongside the qualitative expertise that interprets them. It is very rare for a quantitative assessment to stand alone.

Even so, the fact remains that an over-reliance on even responsible scientometrics can still have a negative impact on the research evaluation ecosystem due to trickle-down effects. The legitimate use of bibliometrics to understand country-level activity can soon end up, illegitimately, in promotion criteria if too much reward is attached to bibliometric assessments at higher levels of aggregation (for example, global university rankings). This was recognized in principle 9 of the Leiden Manifesto for the responsible use of bibliometrics, written by scientometricians, which called on evaluators to “Recognize the systemic effects of assessment and indicators [because] indicators change the system through the incentives they establish.” Ultimately, the reason the CoARA Commitments are so strongly worded around the centrality of peer review is that they respond to guidance originally provided by the scientometric community itself.

Breaking the impasse

It’s important that as we seek to move towards research assessment reform we do so together, not allowing minor points of difference to become large bones of contention. An important facet of CoARA’s implementation is the facilitation of mutual learning and exchange on evaluation practices where qualitative and quantitative approaches are meaningfully combined. In this context, we will gladly discuss any scientometrician’s concern with Commitment 2, but it is important to remember that this is one of ten commitments. All are important and designed to be taken together. As no issues seem to have been raised with the other nine commitments, we can hopefully assume that these are accepted by the scientometric community as a valid way forward.

The truth is that the research assessment reform movement needs scientometricians and scientometricians need research assessment reforms. Such reforms can benefit from the expertise of scientometricians as we seek to identify the rightful role of metrics in reformed assessments. Indeed, we are already starting to see something of a shift in this direction from the scientometric scholars who previously accused CoARA of bibliometric denialism and who are now turning their attention to developing approaches for ‘narrative bibliometrics.’

Equally, scientometric scholars, like many others, will ultimately benefit from assessment reforms as all their contributions are brought within the purview of recognition and reward regimes, and more fairly, equitably and robustly assessed.

Our best chance of success is to pull together and not to pull in different directions. We hope what we have laid out here provides some clarifications and reassurance to the scientometric community and is just one of many conversations going forward as we pool our expertise and reform research assessment together.

Luciana Balboa (pictured) is a biomedical researcher in the area of infectious diseases at the Institute for Biomedical Research on Retroviruses and AIDS (INBIRS), affiliated with the University of Buenos Aires – CONICET in Argentina. She is a member of the CoARA Steering Board. Elizabeth (Lizzie) Gadd chairs the INORMS Research Evaluation Group and is vice chair of the Coalition for Advancing Research Assessment (CoARA). In 2022, she co-authored 'Harnessing the Metric Tide: Indicators, Infrastructures and Priorities for UK Research Assessment'. She is the head of Research Culture and Assessment at Loughborough University, UK, and champions the ARMA Research Evaluation SIG. She previously founded the LIS-Bibliometrics Forum and The Bibliomagician Blog and was the recipient of the 2020 INORMS Award for Excellence in Research Management and Leadership. Eva Méndez is deputy vice-president for research policy and open science, and professor of library and information science at the Universidad Carlos III de Madrid. She is also a member of the CoARA Steering Board. Janne Pölönen is secretary general for the Publication Forum at the Federation of Finnish Learned Societies. A Lic.Phil. in history, his recent work is in the fields of research assessment, scholarly communication, open science and learned societies. Pölönen advocates for the Helsinki Initiative on Multilingualism in Scholarly Communication and coordinates the CoARA Working Group on Multilingualism and Language Biases in Research Assessment. He is also a member of the CoARA Steering Board. Karen Stroobants is a postdoctoral researcher in the Centre for Misfolding Diseases at the University of Cambridge and vice president of CoARA. Erzsébet Tóth-Czifra is the program manager of CoARA.

