
The Cultural Roots of the Latest Big Retraction

May 29, 2015
“I see a train wreck looming,” psychologist Daniel Kahneman has said of social priming research. (Photo: Wikimedia)

The preeminent journal Science announced Thursday that it is retracting a paper on changing attitudes towards gay marriage that it published with great fanfare less than six months ago.

The paper, by University of California, Los Angeles graduate student Michael LaCour and Columbia University political science professor Donald Green, garnered a lot of media attention when it was published, reaching a wide audience thanks to a segment on the radio show This American Life.

This is likely in part due to the intensely topical content of the paper, which found that subjects in California were more inclined to become supportive of same-sex marriage if canvassed by a gay individual.

This article by Jonathan Borwein originally appeared at The Conversation, a Social Science Space partner site, under the title “The ‘train wreck’ continues: another social science retraction”

But when another group of researchers attempted to run a follow-up study, they got very different results. When they looked more closely at the original paper, they found irregularities, including the fact that the survey firm the original authors said they had commissioned had no knowledge of the study.

Since then, Green confronted his co-author LaCour and requested the original data, only to have LaCour confess to “falsely describing at least some of the details of the data collection”.

This prompted Green to request a retraction from Science with the publication’s editor-in-chief Marcia McNutt saying:

[…] Green requested that Science retract the paper because of the unavailability of raw data and other irregularities that have emerged in the published paper.

The publication acknowledged that LaCour has not agreed to the retraction, and he has said he is preparing a “definitive response” of his own on the matter.

Another train wreck

This raises at least as many questions as it answers. At the time of publication Green crowed:

The change was equivalent to transforming a Midwesterner into a New Englander on the issue of gay marriage.

Now it appears he had never seen the raw data. Yet he is a senior academic and LaCour only a graduate student, albeit at a top-notch university.

The broader issues of reproducibility and of referee or community access to the underlying experimental data also arise again, as they did in the Reinhart and Rogoff spreadsheet scandal of 2013.

In social science research, as in research more generally, there is a continuum of error running from the typo and the honest mistake, through inadvertent or time-constrained lack of fact checking, to deliberate fraud.

Deliberate fraud of the truth-enhancing kind is amusingly described in the book Free Radicals. The author, Michael Brooks, makes the case that great science rarely plays by the rules and that great scientists – Tesla and the University of Western Australia’s Barry Marshall among them – are often so convinced of the truth and value of what they are doing that they play fast and loose with approved methodology and the standard rules of the scientific enterprise.

The current imbroglio has some of this flavor; LaCour is openly gay and appears to have had no desire to be seen as at arms-length from his research.

Social psychology has seen more than its fair share of all of this. It has rightly been described as a “train wreck” by the eminent psychologist and behavioural economist Daniel Kahneman.

So what are the deeper causes and implications of this most recent case of fraud in the social sciences? Four points come to mind.

1. Lack of rigour

The lack of rigour or community standards in social psychology is something Dave Bailey and I have previously discussed.

In more robust fields, like astrophysics or mathematics, wholesale bullshit is somewhat harder to publish, and is way harder to get away with, in the long term.

Last month I noted interesting new work on what helps make content go viral. The same features may well make certain kinds of fraud (like certain urban myths) more likely to pass muster than others.

Does it tell us something surprising that we want to believe? Does it help drop our defenses? There are some famous examples of fraudulent Holocaust memoirs. No one saw those coming.

2. Getting away with it

Another problem is the sheer ease of generating fraud, particularly in a community that has always viewed its job as assessing interest and novelty rather than diagnosing fraud.

Much like the current internet, the scholarly publication system was designed for use by a relatively small group of like-minded people, for whom rooting out malignant behavior (excluding Newton-Leibniz priority battles) was not really part of the equation.

I am willing to argue that cheating is still a relatively uncommon phenomenon in academia. Tedious and fatuous publications, whose only purpose is to meet the far-from-trivial need for each academic to be credentialed, are a larger issue.

At least this is so when academia is compared to cycling, soccer, finance or any other field in which the potential reputational and financial benefits can be enormous.

I am not convinced that the rate of academic fraud is much greater than it was a generation ago. But the most flagrant cases, such as that of Diederik Stapel, who preferred inventing his research, make one gasp.

Well designed and executed surveys and opinion polls are valuable, if fraught. But with online tools, like SurveyMonkey, no one is more than 30 minutes away from authoring their own poorly designed and implemented social science survey. The old computer science saying “garbage in, garbage out” comes to mind.

3. Benefit versus cost

A further issue is the large upside and often limited perceived downside of faking results.

Fame and fortune apart, there is now a huge industry that produces papers guaranteed to be published in the appropriate part of the food chain – at commensurate prices.

There are clearly more than enough takers to make this lucrative, even when the cost-per-paper can run into the tens of thousands of dollars. Science itself has highlighted the problem of “China’s Publication Bazaar” in gob-smacking detail.

4. Hunger for novelty

Worst of all is the appetite of even the most elite journals, grant councils and universities to always have something cutting-edge and sexy for the media, the public and especially for the decision makers.

This is a story with many co-dependent enablers, and few good guys other than the vast majority of ordinary researchers.

What politician stands up to endorse base funding for arms-length research when they can target money to something new, exciting, buzzword-compliant, and more than likely to go nowhere? Even so, the most valuable university-industry tech transfer is still thought to be Gatorade.

Science and Nature are happy to play the embargo game, and even to break their own rules when they see a possibility for even more lovely coverage.

I know from repeated personal experience in many institutions that whatever my academic masters say about valuing basic research, when push comes to shove, they will drop that bone in favor of the big, shiny, PR-generating reflection in the river below.

Lessons learned

As Daniel Kahneman has argued, the social sciences – whether political science or behavioral economics – must put in place stronger community standards. Access to data and issues of reproducibility need to be embedded in the process.

This is expensive, time consuming and painful. Many smaller journals and academic communities simply do not have the requisite resources. They cannot afford to pay reviewers, nor is it clear that payment would not amplify the problem.

But this particular debacle played out in the leading weekly journal of the American Association for the Advancement of Science (AAAS), which has resources on a scale very few other scientific publishers have. The relatively quick unmasking of the fraud can be read as a sign of reasonably good health, though.

Prevention of the underlying disease will be a longer and messier affair. As with doping in sport or internet crime, the issues are here to stay.

Jonathan Borwein is laureate professor of mathematics at the University of Newcastle and director of the Centre for Computer Assisted Research Mathematics and its Applications. He has worked at Carnegie Mellon, Dalhousie, Simon Fraser, and Waterloo universities and has held two Canada Research Chairs in computing.
