
Seeking a Research Rating That Isn’t Impact Factor

June 13, 2014

Dora the Explorer: DORA, in this case the people behind the San Francisco Declaration on Research Assessment, is exploring ways to replace the impact factor as a metric for judging science. (Photo: Håkan Dahlström, CC BY 2.0)

This article was contributed by the authors of the San Francisco Declaration on Research Assessment: David Drubin (University of California, Berkeley; Molecular Biology of the Cell), Stefano Bertuzzi (American Society for Cell Biology), Michael Marks (Children’s Hospital of Philadelphia; Traffic), Tom Misteli (National Cancer Institute; The Journal of Cell Biology), Mark Patterson (eLife), Bernd Pulverer (EMBO Press), Sandra Schmid (University of Texas Southwestern Medical Center).

***

Scientists, like other professionals, need ways to evaluate themselves and their colleagues. These evaluations are necessary for everyday management decisions: hiring, promotions, awarding grants and so on. One metric has come to dominate these decisions, and it is doing more harm than good.

This metric, the journal impact factor (or simply impact factor), is released annually and measures the average number of times a journal's articles are cited by other scientists in subsequent publications over a set period. The upshot is a hierarchy among journals, and scientists vie to publish their research in a journal with a higher impact factor in the hope of advancing their careers.


This article by David Drubin originally appeared at The Conversation, a Social Science Space partner site, under the title “Time to discard the metric that decides how science is rated”

The trouble is that the impact factor of the journals where researchers publish is a poor surrogate for an individual researcher's accomplishments. Because the range of citations to articles within a journal is so wide, a journal's impact factor is not a good predictor of the number of citations any individual article will receive. The flaws in this metric have been widely acknowledged: it lacks transparency and, most of all, it has unintended effects on how science gets done.

A recent study that attempted to quantify the extent to which publication in high-impact-factor journals correlates with academic career progression highlights just how embedded the impact factor is. While other variables also correlate with the likelihood of getting to the top of the academic ladder, the study shows that impact factors and academic pedigree are rewarded over and above the quality of publications. The study also finds evidence of gender bias against women in career progression and emphasizes the urgent need for reform in research assessment.

Judging scientists by their ability to publish in the journals with the highest impact factors means that scientists waste valuable time and are encouraged to hype up their work, or worse, in an effort to secure a place in these prized journals. They also get no credit for sharing data, software and resources, all of which are vital to progress in science.

This is why, since its release a year ago, more than 10,000 individuals across the scholarly community have signed the San Francisco Declaration on Research Assessment (DORA), which aims to free science from the obsession with the impact factor. The hope is to promote the use of alternative and better methods of research assessment, which will benefit not just the scientific community but society as a whole.

The DORA signatories originate from across the world, and represent just about all constituencies that have a stake in science’s complex ecosystem – including funders, research institutions, publishers, policymakers, professional organizations, technologists and, of course, individual researchers. DORA is an attempt to turn these expressions of criticism into real reform of research assessment, so that hiring, promotion and funding decisions are conducted rigorously and based on scientific judgements.

We can also take heart from real progress in several areas. One of the most influential organizations taking positive steps towards improved assessment practices is the US National Institutes of Health (NIH). The specific change concerns the format of the CV, or “biosketch”, in grant applications. To discourage grant reviewers from focusing on the journals in which previous research was published, the NIH inserted a short section into the biosketch where applicants concisely describe their most significant scientific accomplishments.

At the other end of the spectrum, it is just as important to find individuals who are adopting new tools and approaches to showing their own contributions to science. One such example is Steven Pettifer, a computer scientist at the University of Manchester, who gathers metrics and indicators, combining citations in scholarly journals with social media coverage of his individual articles, to provide a richer picture of the reach and influence of his work.

Another example, as reported in the journal Science, comes from one of the DORA authors, Sandra Schmid at the University of Texas Southwestern Medical Center. She conducted a search for new faculty positions in the department that she leads by asking applicants to submit responses to a set of questions about their key contributions at the different stages in their career, rather than submitting a traditional CV with a list of publications. A similar approach was also taken for the selection of the recipients for a prestigious prize recognizing graduate student research, the Kaluza Prize.

These examples highlight that reform of research assessment is possible right now by anyone or any organization with a stake in the progress of science.

One common feature among funding agencies with newer approaches to research assessment is that applicants are often asked to restrict the evidence supporting their application to a limited number of research contributions. This emphasizes quality over quantity. With fewer research papers to consider, the evaluators have a greater chance of focusing on the science rather than the journal in which it was published. It would be encouraging if more of these policies also explicitly considered outputs beyond publications, such as major datasets and software, a move made by the US National Science Foundation in January 2013. After all, the accomplishments of scientists cannot be measured in journal articles alone.

There have been at least two initiatives focused on metrics and indicators at the article level, from the US standards agency NISO and the UK’s higher education funding body HEFCE. Moves towards a heavy reliance on such metrics in research assessment are premature, and the notion of an “article impact factor” is fraught with difficulty. But with the development of standards, transparency and improved understanding, these metrics will become valuable sources of evidence of the reach of individual research outputs, as well as tools to support new ways of navigating the literature.

As more and more examples appear of practices that don’t rely on impact factors and journal names, scientists will realize that they may not be as trapped by a single metric as they think. Reform will help researchers by enabling them to focus on their research, and help society by improving the return on the public investment in science.


David Drubin is a professor of cell and developmental biology at the University of California, Berkeley. The Drubin and Barnes Lab is interested in the molecular mechanisms that underlie highly dynamic actin-mediated membrane trafficking events. These studies are being carried out in mammalian cells and in budding yeast. The approaches employed for these studies include state-of-the-art real-time image analysis of live cells, genome-wide functional analyses, genetics, molecular genetics, and biochemistry.
