
Giving Reviewers Some Credit: The R-index

May 27, 2015

[Image: Spotlighted but anonymous. So we want to spotlight someone’s effort, but still keep them anonymous …]

Peer review is a pervasive necessity in academia. Grandly proclaimed to be ‘the gatekeepers of truth’, reviewers are supposed to ensure scientific quality by scrutinizing new research prior to publication. So as productive early career scientists, we spend part of our time diligently reviewing manuscripts for journals. We do so with the understanding that others will do the same when we submit papers ourselves.

Lamentably, this doesn’t always work out as nicely as it sounds. It turns out that many academics do not do their fair share of reviews (Petchey et al. 2014). Furthermore, editors have a hard time getting appropriate and effective reviewers to agree to review; when reviewers do agree, they are often late or limited in their constructive commentary; and in many cases, the reviewers, editors, or sometimes even journals actually outnumber the authors (Hochberg et al. 2009). All of this combined has led the broader community to conclude that the current system of peer review is broken.


This article by Shane Gero and Maurício Cantor originally appeared on http://blogs.lse.ac.uk/impactofsocialsciences/ as “Passing Review: how the R-index aims to improve the peer-review system by quantifying reviewer contributions” and is reposted under the Creative Commons license (CC BY 3.0).

One could easily point the finger at anonymity as largely responsible for maintaining such bad reviewer behavior in our community. Economists, sociologists, and even primatologists will tell you that pro-social norms are maintained by the threat of public reprisal. Blind review allows poor and unethical reviewing to continue. A recent example is a reviewer’s sexist comments to two female co-authors, suggesting that adding male authors would improve the manuscript. This prompted the journal to “remove” both the reviewer from its database and the editor from their position. Yet the only punishment faced by the anonymous reviewer is that they will get fewer review requests; something many might strive for if success is measured solely by publications. As a result, even though there is evidence that open knowledge of reviewers’ identities produces higher-quality, more courteous reviews that are more likely to recommend publication, there is still strong resistance to open review.

So how, then, do we move forward without completely revolutionizing our current system of review and rebuilding journal culture from the bottom up? The answer to this question has been hotly debated in the coffee rooms of academic institutions since long before either of us was born. Over the years, editors have tried punishing tardy reviewers by delaying publication of their own subsequent manuscripts, as well as the exact opposite: rewarding conscientious reviewers for their efforts by ensuring speedy review of their work. They have even considered paying cash for reviews! Yet, despite these attempts, there is still no widely accepted method to credit reviewers for their time and expertise.

In the era of scientometrics, the science of measuring and analyzing science, academics have shown a growing interest in measuring productivity and reputation. Although attitudes in the community range from condemnation to praise, metrics seem to be unavoidable shortcuts for evaluating an academic’s productivity. How many of us haven’t Googled our peers’ H-index? We therefore believe that simply giving citable recognition to reviewers can improve the peer-review system, encouraging not only more participation but also higher-quality, constructive input, without any loss of anonymity.

In our recent paper in Royal Society Open Science, we outline the R-index, a straightforward way of quantifying contributions through review. The R-index aims to track scientists’ efforts as reviewers, accounting not only for the quantity of reviewed manuscripts, but also the length of the manuscripts as a proxy for effort, the impact factor (IF) of the journal as a proxy for standing in the field, and, perhaps most importantly, a quality score based on the editor’s feedback on the punctuality, utility, and impact of the reviews themselves. The quality control built into the R-index allows editors to quantify not only how useful the review was to the decision to publish, but also how constructive the commentary was for the authors and, what should be the most basic of courtesies, whether it was returned on time.
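To make those ingredients concrete, here is a minimal sketch of an R-index-style calculation in Python. The simple product-sum form, the variable names, and the example numbers are illustrative assumptions only; the precise formula and its normalization are spelled out in the paper.

```python
from dataclasses import dataclass

@dataclass
class Review:
    pages: int          # manuscript length, a proxy for reviewer effort
    journal_if: float   # journal impact factor, a proxy for standing in the field
    quality: float      # editor's 0-1 score for punctuality, utility, and impact

def r_index(reviews: list[Review]) -> float:
    """Credit each review by effort (length), weighted by journal standing (IF)
    and the editor-assigned quality score, then sum over all reviews.
    (Illustrative form only; see the paper for the published definition.)"""
    return sum(r.pages * r.journal_if * r.quality for r in reviews)

# Hypothetical review history for one reviewer
reviews = [
    Review(pages=25, journal_if=2.1, quality=0.9),  # thorough and on time
    Review(pages=40, journal_if=4.5, quality=0.8),
    Review(pages=12, journal_if=1.3, quality=0.4),  # late and cursory
]
print(f"R-index: {r_index(reviews):.1f}")  # -> R-index: 197.5
```

Note how the quality score does the heavy lifting: the short, late review contributes almost nothing, regardless of how quickly it was produced.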


Comment from a Publons adviser

Just to clarify, Publons Merit is not “based entirely on the number of reviews produced”. It incorporates much of what the R-Index proposes: Merit is gained if reviews are endorsed by other users. Merit is also gained if reviews are made open.
Publons also tracks review length, so this could easily be incorporated into the score. But length does not equate to quality, so that’s probably not a good idea.
I’m also baffled by the idea that the R-Index should incorporate impact factors – it seems likely that articles submitted to journals with a lower impact factor would need more comprehensive feedback and thus more effort from the reviewer.
– Chris Sampson


In this way, the R-index differs greatly from other efforts to recompense reviewers. We hoped to create not only a tool to assess contributions, but one that improves the system itself. Perhaps the best-known review metric is the website Publons. Publons shares our principles of encouraging participation and transparency by turning peer review into a measurable research output. However, where Publons has succeeded in creating an easy-to-use social network and ranking metric for reviewers, its Merit metric is limited in that it is based entirely on the number of reviews produced. As a result, the metric is biased by career stage (the longer you have been an academic, the more reviews you have been asked to do and the more you are likely to have done), and it cannot speak to the utility and quality of the reviews themselves. In contrast, our simulations of R-index performance demonstrate that the R-index is egalitarian across career stages. Hard-working early career scientists such as doctoral students and post-docs, who tend to complete the bulk of the reviewing load in their field, score as well as leading scientists who review only for high-impact, multidisciplinary journals. Furthermore, the R-index is difficult to game: the quality score prevents anyone from earning credit with a string of quick, poor reviews, and because the journal’s IF is factored in, a large number of reviews for predatory or less reputable journals counts for little.

On the whole, the R-index provides several useful benefits to the academic community:

  1. R-index quantifies contributions as a reviewer. Academic productivity can no longer justifiably be based solely on publications. Relating an academic’s output as an author (H-index) and as a reviewer (R-index) allows for a more holistic view of their contributions.
  2. R-index creates transparency. By design, it encourages journals to make basic data on reviewers available. Reviewers need not be linked to specific papers; journals need only disclose the number, length, and quality score of reviews by each academic.
  3. R-index judges journals on their utility to authors rather than their alleged impact. A journal’s R-index, averaged across its pool of reviewers, adds a metric for assessing the journal as a practical and efficient service in the publication process (see the sketch after this list).
  4. R-index makes life easier for editors. Editing is another unquantified and thankless endeavor in academia; the R-index allows editors to manage their stable of reviewers and avoid poor, ineffective, or repeatedly late reviewers – not to mention blatantly sexist ones.
  5. R-index is easy to implement. It uses simple data that most journals already collect, and its calculation can be automated by existing cross-publisher databases such as Google Scholar, Scopus, or Web of Science. We are currently working with publishers and databases to implement the calculation of R across fields.
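The journal-level metric in benefit 3 is a simple aggregation. Continuing the sketch above (again, an illustrative assumption rather than the paper’s exact definition):

```python
def journal_r(reviewer_pools: dict[str, list[Review]]) -> float:
    """Mean R-index across every reviewer who has served the journal.
    Keys are reviewer identifiers; values are their review histories."""
    scores = [r_index(rs) for rs in reviewer_pools.values()]
    return sum(scores) / len(scores) if scores else 0.0
```

A journal whose reviewers are consistently thorough and punctual scores high, regardless of its impact factor.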

Highly R-scored reviewers are our community’s unheralded pillars; our proposed metric emphasizes this aspect of scientists’ productivity, increasing their visibility without revolutionizing the peer-review system. The R-index subtly shifts the perception of conducting reviews from an unrecognized obligation to a measurable contribution to one’s academic profile and to the progress of one’s field.

Postscript: One of the reasons we chose to submit our metric to Royal Society Open Science is that it supports open review; and, if you’re interested, you can see the entire review of our R-index manuscript online alongside our publication here.


Shane Gero is an FNU Research Fellow in the Marine Bioacoustics Lab in the Department of Zoophysiology at Aarhus University in Denmark. For the last ten years, his research has driven The Dominica Sperm Whale Project, a long-term study of wild sperm whale social structure and communication. He tweets at @sgero and you can follow his research program @DomWhale. Maurício Cantor is a doctoral candidate in the Biology Department at Dalhousie University, Canada. His research focuses on the ecology of interactions between individuals and between species, lying at the interface of fields such as behavioral, population, and community ecology.

