
Stop Buying Cobras: Halting the Rise of Fake Academic Papers

July 22, 2024

In the 1800s, British colonists in India set about trying to reduce the cobra population, which was making life and trade very difficult in Delhi. They began to pay a bounty for dead cobras. The strategy very quickly resulted in the widespread breeding of cobras for cash.

This danger of unintended consequences is sometimes referred to as the “cobra effect.” It can also be summed up by Goodhart’s Law, named after British economist Charles Goodhart, who observed that when a measure becomes a target, it ceases to be a good measure.

This article by Lex Bouter originally appeared on The Conversation, a Social Science Space partner site, under the title “Fake academic papers are on the rise: why they’re a danger and how to stop them.” It is based on a presentation given by the lead author at Stellenbosch University, South Africa on February 12, 2024. Natalie Simon, a communications consultant specializing in research who is part of the communications team for the 8th World Conference on Research Integrity and is completing an MPhil in Science and Technology Studies at Stellenbosch University, co-authored this article.

The cobra effect has taken root in the world of research. The “publish or perish” culture, which values publications and citations above all, has spawned its own myriad of “cobra breeding programs.” These include widespread questionable research practices, such as playing up the impact of research findings to make work more attractive to publishers.

It’s also led to the rise of paper mills — criminal organizations that sell academic authorship. A report on the subject describes paper mills as “(the) process by which manufactured manuscripts are submitted to a journal for a fee on behalf of researchers with the purpose of providing an easy publication for them, or to offer authorship for sale.”

These fake papers have serious consequences for research and its impact on society. Not all fake papers are retracted, and even those that are retracted often still make their way into systematic literature reviews, which are in turn used to draw up policy guidelines, clinical guidelines, and funding agendas.

How paper mills work

Paper mills fuel their business model with the desperation of researchers who are often young, overworked, and on the periphery of academia, struggling to overcome high barriers to entry.

They are frighteningly successful. The website of one such company based in Latvia advertises the publication of more than 12,650 articles since its launch in 2012. In an analysis of just two journals jointly conducted by the Committee on Publication Ethics and the International Association of Scientific, Technical and Medical Publishers, more than half of the 3,440 article submissions over a two-year period were found to be fake.

Journals across all disciplines are estimated to be experiencing a steeply rising number of fake paper submissions; currently, the rate is about 2 percent. That may sound small, but given the sheer volume of scholarly publishing, with millions of articles appearing each year, it means a great many fake papers are published. Each of these can seriously damage patients, society, or nature when its findings are applied in practice.

The fight against fake papers

Many individuals and organizations are fighting back against paper mills.

The scientific community is lucky enough to have several “fake paper detectives” who volunteer their time to root out fake papers from the literature. Elizabeth Bik, for instance, is a Dutch microbiologist turned science integrity consultant. She dedicates much of her time to searching the biomedical literature for manipulated photographic images or plagiarized text. There are others doing this work, too.

Organizations such as PubPeer and Retraction Watch also play vital roles in flagging fake papers and pressuring publishers to retract them.

These and other initiatives, like the STM Integrity Hub and United2Act, in which publishers collaborate with other stakeholders, are trying to make a difference.

But this is a deeply ingrained problem. The use of generative artificial intelligence like ChatGPT will help the detectives, but it will also likely result in more fake papers, which are now easier to produce and more difficult, or even impossible, to detect.

Stop paying for dead cobras

The key to changing this culture is a switch in how researchers are assessed.

Researchers must be acknowledged and rewarded for responsible research practices, such as a focus on transparency and accountability, high-quality teaching, good supervision, and excellent peer review. This will extend the scope of activities that yield “career points” and shift the emphasis of assessment from quantity to quality.

Fortunately, several initiatives and strategies already exist to focus on a balanced set of performance indicators that matter. The San Francisco Declaration on Research Assessment, established in 2012, calls on the research community to recognize and reward various research outputs beyond just publication. The Hong Kong Principles, formulated and endorsed at the 6th World Conference on Research Integrity in 2019, encourage research evaluations that incentivize responsible research practices while minimizing perverse incentives that drive practices like purchasing authorship or falsifying data.

These issues, as well as others related to protecting the integrity of research and building trust in it, were discussed during the 8th World Conference on Research Integrity in Athens, Greece in June.

Openness

Practices under the umbrella of “Open Science” will be pivotal to making the research process more transparent and researchers more accountable. Open Science is a movement consisting of initiatives to make scholarly research more transparent and equitable, ranging from open access publication to citizen science.

Open Methods, for example, involves the pre-registration of a study design’s essential features before its start. A registered report containing the introduction and methods section is submitted to a journal before data collection starts. It is subsequently accepted or rejected based on the relevance of the research, as well as the methodology’s strength.

The added benefit of a registered report is that reviewer feedback on the methodology can still change the study methods, as the data collection hasn’t started. Research can then begin without pressure to achieve positive results, removing the incentive to tweak or falsify data.

Peer review

Peer reviewers are an important line of defense against the publication of fatally flawed or fake papers. In this system, quality assurance of a paper is done on a completely voluntary and often anonymous basis by an expert in the relevant field or subject.

However, the person doing the review work receives no credit or reward. It’s crucial that this sort of “invisible” work in academia be recognized, celebrated, and included among the criteria for promotion. Doing so can contribute substantially to detecting questionable research practices (or worse) before publication.

It will incentivize good peer review, so fewer suspect articles pass through the process, and it will open more paths to success in academia, thus breaking up the toxic publish-or-perish culture.

Lex Bouter holds a tenured chair in methodology and integrity at the Department of Epidemiology and Data Science of the Amsterdam University Medical Centers and the Department of Philosophy of the Faculty of Humanities of the Vrije Universiteit. He has been a professor of epidemiology since 1992 and served his university as its rector between 2006 and 2013. Bouter is currently involved in research and teaching on research integrity topics and is the founding chair of the World Conferences on Research Integrity Foundation.
