Research Ethics

Data sharing cartoon: "To deposit or not to deposit, that is the question," from Roche DG, Lanfear R, Binning SA, Haff TM, Schwanz LE, et al. (2014), "Troubleshooting Public Data Archiving: Suggestions to Increase Participation," PLoS Biol 12(1): e1001779, doi:10.1371/journal.pbio.1001779. Licensed under CC BY 4.0 via Wikimedia Commons.

Why We’re Encouraging Authors to Share Their Data with Reviewers

May 9, 2017


The journal Psychological Science is taking steps to encourage would-be authors to give reviewers easy access to the data underlying the analyses reported in their manuscripts. This is part of a wider effort to promote transparency and replicability in work published in the journal. I discussed the rationale for encouraging authors to share data and materials in a recent editorial, “Sharing Data and Materials in Psychological Science.” Here I briefly highlight some of the principal points.

At the recent International Convention of Psychological Science, University of California, Davis psychologist Simine Vazire quoted the motto of the Royal Society, perhaps the oldest learned society for science: Nullius in verba, roughly “Take nobody’s word for it.” That is to say, “No offense, mate, but show me the data.”

Science is rooted in data. Scientists use reason and rhetoric and a variety of other tools, but what makes science ‘science’ is that data are the ultimate arbiters of claims. Yet data are themselves subject to interpretation, and the summary statistics produced by data analysis do not always accurately capture the true nature of a data set. To give a simple example, if a set of scores has a bimodal distribution, then the mean and standard deviation of those scores do not accurately represent the distribution. Moreover, inferential statistical tests used to assess the “statistical significance” of results (e.g., the likelihood that two sets of scores would differ to the observed extent by chance alone) rest on assumptions about the data. When those assumptions are violated, the interpretation of such tests can be badly compromised.
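The bimodal example can be made concrete in a few lines of code (the scores below are hypothetical, invented purely for illustration):

```python
import statistics

# Hypothetical bimodal scores: half the sample clusters near 2, half near 8.
scores = [1.8, 2.0, 2.1, 2.2, 1.9, 7.9, 8.0, 8.1, 8.2, 7.8]

print(statistics.mean(scores))   # 5.0 -- a value no participant actually produced
print(statistics.stdev(scores))  # about 3.17 -- says nothing about the two tight clusters
```

A reviewer who sees only “M = 5.0, SD = 3.17” has no way to know the scores fall into two distinct groups; a reviewer with access to the data does.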

In short, the devil is in the details of the data. It follows that reviewers of manuscripts submitted for publication can do a better job of assessing the extent to which the data support the manuscript’s claims if they can examine the data.

Psychological Science is not requiring submitting authors to share data as part of the review process, but we are strongly encouraging such sharing. Specifically, authors are asked to share their data with reviewers or to explain why they are not sharing it. This fits with the Transparency and Openness Promotion (TOP) guidelines developed and championed by the Center for Open Science, which have been endorsed by more than 2,900 journals and organizations, including the Association for Psychological Science. Our submission procedure also satisfies the Peer Reviewers’ Openness Initiative.

It is not easy to create a data and analysis archive that is so clearly explained and well documented that an outsider can understand it. Then again, it’s not easy to be a good scientist. Planning for and developing an archive for data and analyses should be a standard part of conducting a hypothesis-testing project. Best practice is to preregister your plan for a project before you begin the hypothesis-testing phase. For further information about preregistration, see the Observer article I coauthored with Dan Simons and Scott Lilienfeld, “Research Preregistration 101.”

What constitutes “the data” is not always clear. Generally, authors do not provide the rawest form of the data; indeed, what is provided to reviewers is often several steps removed from the rawest form. For example, some studies take as their raw data video recordings of subjects’ behavior, which are later summarized or scored in some way. It is helpful to reviewers (and readers) if samples of such videos are available, but practical and/or ethical constraints might preclude sharing all of the footage; in any case, it is the quantitative scores submitted to analysis that are generally of most interest to reviewers. The principle is that authors are encouraged to share the data that will enable reviewers to assess the claims advanced in the manuscript.

The ideal to which I hope to inspire authors is that they post their de-identified data in immutable, time-stamped form on a third-party site (not necessarily with free and open access to the world, just with some way for reviewers to access the data); see http://www.re3data.org/ for a registry of research data repositories. I expect that having done this as part of the review process will greatly increase the likelihood that authors of accepted submissions will make their data available to other psychologists in such a repository after publication.

Helping scientists have easy access to one another’s data should increase the value of those data and the amount learned from them. This is not just a matter of detecting errors or shortcomings, but also of building creatively on prior data (e.g., via alternative analyses that shed new light on the meaning of a data set or mega-analyses that combine data from several studies).

Encouraging authors to share data (and materials) is just the latest step Psychological Science has taken to encourage transparency and replicability in the works it publishes. See, for example, Eric Eich’s 2014 editorial, “Business Not as Usual,” and my 2015 editorial, “Replication in Psychological Science.” There is evidence that these efforts are paying off (e.g., see the figure below from Kidwell et al., 2016).


D. Stephen Lindsay, a professor at the University of Victoria, is a cognitive psychologist working in the broad areas of memory and cognition, with a special interest in determinants of the subjective experience of remembering, source monitoring, and the application of theories concerning these processes to everyday memory phenomena (e.g., eyewitness memory). He is the editor-in-chief of the journal Psychological Science.

