The Challenge of Regulating Research to Avoid Fraud
Most scientists and medical researchers behave ethically. However, a growing number of high-profile scandals in recent years, in which researchers have been exposed as having falsified their data, raises the question of how we should deal with research fraud.
There is little scholarship on this subject that crosses disciplines and engages with the broader phenomenon of unethical behavior within the domain of research.
This is partly because disciplines tend to operate in their own silos, and because the universities in which researchers are employed tend to minimize adverse publicity.
When scandals erupt, embarrassment in a particular field is experienced for a short while – and researchers may leave their university. But few articles are published in scholarly journals about how the research fraud was perpetrated; how it went unnoticed for a significant period of time; and how prevalent the issue is.

We need to start looking more deeply into who engages in research misconduct; why it happens; what the patterns of such behavior are; how we can identify it; and how we can deter it.
Recent exposures have brought home the point in a confronting way. Two of them are of particular relevance to Australia.
Recent cases of research misconduct
In April 2016, a former University of Queensland professor, Bruce Murdoch, received a two-year suspended sentence after pleading guilty to 17 fraud-related charges. A number of them arose from an article he published in the European Journal of Neurology which asserted a breakthrough in the treatment of Parkinson’s disease.
The sentencing magistrate found that Murdoch forged consent forms for study participants and that his research was “such as to give false hope to Parkinson’s researchers and Parkinson’s sufferers.”
She found that there was no evidence at all that Murdoch had conducted the clinical trial on which his purported findings were based.
Murdoch’s plea of guilty and evidence that he was suffering from severe depression and dealing with a cancer diagnosis were factors that resulted in his jail sentence being suspended.
In 2015, Anna Ahimastos, who was employed at the Baker IDI Heart and Diabetes Institute in Melbourne, admitted to fabricating research on blood pressure medications published in two international journals.
The research purported to establish that, for patients with peripheral artery disease (PAD) experiencing intermittent claudication (a condition in which cramping pain in the leg is induced by exercise), treatment with a particular drug resulted in significant improvements.
It had significant ramifications for treatment of PAD and, presumably not coincidentally, also for uptake of the drug. Ahimastos’ research was later retracted from the Journal of the American Medical Association following an internal investigation by the Baker Institute. However, while she lost her employment, she was not criminally charged.
In recent years, a series of other research fraud cases has been reported around the world, such as that involving anesthesiologist Scott Reuben, who faked at least 21 papers on analgesia therapy.
His work sought to encourage surgeons to move away from the first generation of non-steroidal anti-inflammatories (NSAIDs) to multi-modal therapy utilizing the newer COX-2 inhibitors.
Reuben was a prominent speaker on behalf of large pharmaceutical companies that produced the COX-2 drugs. However, it emerged that he had forged the name of an alleged co-author and that a study purporting to report data on 200 patients in fact contained no data at all. He was charged with criminal fraud in relation to spurious research conducted between 2000 and 2008.
He was sentenced to six months’ imprisonment after the plea on his behalf also emphasized the toll that the revelations had taken upon his mental health.
In 2015, in the most highly publicized criminal case in the area so far, biomedical scientist Dong-Pyou Han of Iowa State University was sentenced to 57 months’ imprisonment for fabricating and falsifying data in HIV vaccine trials. He was also ordered to pay back US$7.2 million to the government agency that funded his research.
Harm caused by fake research
More commonly, though, instances of comparable fraud have not resulted in criminal charges – in spite of the harm caused.
In the Netherlands, for instance, over 70 articles by celebrity social psychologist Diederik Stapel were retracted. His response was to publish a book, entitled in English Derailment, “telling all” about how easy it was to engage in scholarly fraud and what it was that led him to succumb to the temptation to do so.
The book gives memorable insights into the mind of an academic fraudster, including his grandiose aspirations to be the acknowledged leader in his field:
“My desire for clear simple solutions became stronger than the deep emotions I felt when I was confronted with the ragged edges of reality. It had to be simple, clear, beautiful and elegant. It had to be too good to be true.”
And then there’s the notorious case of the Japanese scientist Haruko Obokata, who claimed to have triggered stem cell abilities in regular body cells.
An inability to replicate her findings resulted in an investigation, which revealed not just fraud in her postdoctoral stem cell research, but major irregularities in her doctorate. This resulted in the removal of her doctoral qualification, retraction of the papers, professional disgrace and resignation from her employment.
But the ripple effect was much wider. Her co-author and supervisor committed suicide. There was a large reduction in government funding of the research establishment that employed her. Her line of research into cells that have the potential to heal damaged organs, repair spinal cords and treat diseases such as Alzheimer’s and diabetes was discredited, and grave questions were asked about the academic lapses that allowed her to obtain her PhD.
Despite this, Obokata too published a book denying impropriety and displacing responsibility for her conduct onto others.
Accountability issue
It is easy to dismiss such examples of intellectual dishonesty as aberrations – rotten apples in an otherwise healthy scholarly barrel – or to speak of excessive pressures on researchers to publish.
But there is a wider accountability issue and a cultural problem within the conduct and supervision of research, as well as with how it is published.
A review of the 2,047 retractions listed in PubMed as of May 2012 found that 67.4% were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%) and plagiarism (9.8%).
This does not prove that the incidence of research fraud is rising; it may be that researchers and journal editors are simply getting better at identifying and retracting papers that are either fraudulent or plainly wrong. But it strongly suggests that the checks and balances too often prove inadequate until the problems are belatedly exposed.
As for cultural issues, a 2012 survey by the British Medical Journal of more than 2,700 researchers found that 13% admitted knowledge of colleagues “inappropriately adjusting, excluding, altering or fabricating data” for the purpose of publication.
Why are researchers tempted to fake their results?
The temptation for researchers to fake their results can take many forms.
It can be financial – to acquire money, to save money and to avoid losing money. It can be to advance one’s career. It can also be a desire to attract or maintain kudos or esteem, a product of narcissism, or the expression of an excessive commitment to ambition or productivity.
It can be to achieve ascendancy or retribution over a rival. Or it can be the product of anxiety about under-performance or associated with psychiatric conditions such as bipolar disorder.
What all of these motives have in common is that their outcome is intellectual dishonesty that can have extremely serious repercussions.
The check and balance of peer review in publication has, at best, a modest prospect of identifying such conduct.
Peer review
In peer review, the primary data are not made available to the reviewer. All the reviewer can do is scrutinize the statistics, the research methodology and the plausibility of the interpretation of the data.
If the fraud is undertaken “professionally”, and a study’s results are modestly and sensibly expressed, the reviewer is highly unlikely to identify the problem.
In 1830, the mathematician Charles Babbage classified scientific misconduct into “hoaxing” (making up results, but wanting the hoax at some stage to be discovered), “forging” (fabricating research outcomes), “trimming” (manipulating data) and “cooking” (unjustifiable selection of data).
Experience over the past 20 years suggests that outright forging of results is the most successful mechanism employed by the academically unscrupulous, although those who engage in forging often also tend to engage in trimming, cooking and plagiarism – their intellectual dishonesty tends to be expressed in more than one way.
Removing temptations
The challenges include how we can remove the temptations for such conduct.
Part of the answer lies with clear articulation of proprieties within codes of conduct. But much more is required.
A culture of openness in respect of data needs to be fostered. Supervision and collaboration need to be meaningful, rather than tokenistic. And there needs to be an environment that enables challenge to researchers’ methodologies and proprieties, whether by whistleblowers or others.
Publishers, journal editors and the funders of scholarly research need to refashion the culture of scholarly publication to reduce the practice of gift authorship, whereby persons who have not genuinely contributed to a publication are named as authors. The issue here is that multiple authorship can cloud responsibility for scholarly contributions and blur the oversight responsibilities of ethics committees across institutions.
Journals need to be encouraged to be prepared to publish negative results and critiques and analyses of the limitations of orthodoxies.
When allegations are made, they must be investigated in a way that commands respect and confidence from all stakeholders. This means there is much to be said for the establishment of an external, government-funded Office of Scholarly Integrity, based on the model of the US Office of Research Integrity, resourced and empowered to investigate allegations of scholarly misconduct objectively and thoroughly.
Finally, there is a role for the criminal law to discourage grossly unethical conduct in research.
Where funders are swindled of their grants, where institutions are damaged by fraud, where research conduct is brazenly faked, such conduct is so serious as to justify the intrusion of the criminal law to punish, deter and protect the good name of scholarly research.