Research Ethics


Why We’re Encouraging Authors to Share Their Data with Reviewers

May 9, 2017

[Cartoon: "To deposit or not to deposit, that is the question," from Roche DG, Lanfear R, Binning SA, Haff TM, Schwanz LE, et al. (2014) Troubleshooting Public Data Archiving: Suggestions to Increase Participation. PLoS Biol 12(1): e1001779. doi:10.1371/journal.pbio.1001779. CC BY 4.0, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:To_deposit_or_not_to_deposit,_that_is_the_question_-_journal.pbio.1001779.g001.png]

The journal Psychological Science is taking steps to encourage would-be authors to give reviewers easy access to the data underlying the analyses reported in their manuscripts. This is part of a wider effort to promote transparency and replicability in works published in the journal. I discussed the rationale for encouraging authors to share data and materials in a recent editorial, “Sharing Data and Materials in Psychological Science.” Here I briefly highlight some of the principal points.

At the recent International Convention of Psychological Science, University of California, Davis psychologist Simine Vazire quoted the motto of the Royal Society, perhaps the oldest learned society for science: Nullius in verba, “On no one’s word.” That is to say, “No offense, mate, but show me the data.”

Science is rooted in data. Scientists use reason and rhetoric and a variety of other tools, but what makes science ‘science’ is that data are the ultimate arbiters of claims. Yet data are themselves subject to interpretation, and the summary statistics produced by data analysis do not always accurately capture the true nature of a data set. To give a simple example, if a set of scores has a bimodal distribution, then the mean and standard deviation of those scores do not accurately represent the distribution. Moreover, inferential statistical tests used to assess the “statistical significance” of results (e.g., the likelihood that two sets of scores would differ to the observed extent by chance alone) entail various assumptions about the data. When those assumptions are violated, the interpretation of such tests can be badly compromised.
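As a small illustration of that point (this sketch is mine, not from the editorial, and the simulated scores are invented for demonstration), here is a minimal Python example in which the mean and standard deviation of a bimodal set of scores point to a “typical” value that almost no participant actually produced:

```python
# Minimal sketch: summary statistics can misrepresent a bimodal sample.
# The scores are simulated; they stand in for any two-cluster data set.
import numpy as np

rng = np.random.default_rng(0)
low_mode = rng.normal(loc=20, scale=5, size=500)    # one cluster of scores near 20
high_mode = rng.normal(loc=80, scale=5, size=500)   # a second cluster near 80
scores = np.concatenate([low_mode, high_mode])

print(f"mean = {scores.mean():.1f}")                 # ~50, between the two modes
print(f"sd   = {scores.std(ddof=1):.1f}")            # ~30, inflated by the gap
# A quick histogram, e.g. np.histogram(scores, bins=20), exposes the two modes
# that the mean and standard deviation alone conceal; that is exactly the kind
# of detail a reviewer can only notice with access to the data themselves.
```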

In short, the devil is in the details of the data. It follows that reviewers of manuscripts submitted for publication can do a better job of assessing the extent to which the data support the manuscript’s claims if they can examine the data.

Psychological Science is not requiring submitting authors to share data as part of the review process, but we are strongly encouraging such sharing. Specifically, authors are asked to share their data with reviewers or to explain why they are not sharing it. This fits with the Transparency and Openness Promotion (TOP) guidelines developed and championed by the Center for Open Science, which have been endorsed by more than 2,900 journals and organizations, including the Association for Psychological Science. Our submission procedure also satisfies the Peer Reviewers’ Openness Initiative.

It is not easy to create a data and analysis archive that is so clearly explained and well documented that an outsider can understand it. It’s not easy to be a good scientist. Planning for and developing an archive for data and analyses should be a standard part of conducting a hypothesis-testing project. Best practice is to preregister your plan for a project before you begin the hypothesis-testing phase. For further information about preregistration, see the Observer article I coauthored with Dan Simons and Scott Lilienfeld, “Research Preregistration 101.”

What constitutes “the data” is not always clear. Generally, authors do not provide the rawest form of the data; indeed, what is provided to reviewers is often several steps removed from the rawest form. For example, some studies take as their raw data video recordings of subjects’ behavior, which are later summarized or scored in some way. It is helpful for reviewers (and readers) if samples of such videos are available, but practical and/or ethical constraints might preclude sharing all of the footage, and in any case it is the quantitative scores submitted to analysis that are generally of most interest to reviewers. The principle is that authors are encouraged to share the data that will enable reviewers to assess the claims advanced in the manuscript.

The ideal to which I hope to inspire authors is that they post their de-identified data in immutable, time-stamped form on a third-party site (not necessarily with free and open access to the world; just providing some way by which reviewers can access the data). See http://www.re3data.org/ for a registry of research data repositories. I expect that having done this as part of the review process will greatly increase the likelihood that authors of accepted submissions will make the data available to other psychologists in such a repository post publication.
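One hedged sketch of what “immutable, time-stamped” can look like in practice follows; it is an illustration rather than any part of the journal’s submission workflow, and the file name is a placeholder. An author can record a cryptographic hash and a UTC timestamp for the de-identified data file at deposit time, so that anyone can later verify the file reviewed is the file deposited (repositories typically record their own deposit timestamps as well):

```python
# Hedged sketch: record a checksum and UTC timestamp for a de-identified data
# file before depositing it, so its state at submission can be verified later.
# "deidentified_data.csv" is a hypothetical file name, not a required format.
import hashlib
from datetime import datetime, timezone
from pathlib import Path

data_file = Path("deidentified_data.csv")            # hypothetical data set
digest = hashlib.sha256(data_file.read_bytes()).hexdigest()
stamp = datetime.now(timezone.utc).isoformat()

# Keep this record with the deposit (or in the submission cover letter);
# the repository's own deposit metadata serves the same purpose.
print(f"{stamp}  SHA-256  {digest}")
```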

Helping scientists have easy access to one another’s data should increase the value of those data and the amount learned from them. This is not just a matter of detecting errors or shortcomings, but also of building creatively on prior data (e.g., via alternative analyses that shed new light on the meaning of a data set or mega-analyses that combine data from several studies).

Encouraging authors to share data (and materials) is just the latest step Psychological Science has taken to encourage transparency and replicability in the works it publishes. See, for example, Eric Eich’s 2014 editorial, “Business Not as Usual,” and my 2015 editorial, “Replication in Psychological Science.” There is evidence that these efforts are paying off (e.g., see Kidwell et al., 2016, on the effects of the journal’s open-practices badges).


D. Stephen Lindsay, a professor at the University of Victoria, is a cognitive psychologist working in the broad areas of memory and cognition, with a special interest in determinants of the subjective experience of remembering, source monitoring, and the application of theories concerning these processes to everyday memory phenomena (e.g., eyewitness memory). He is the editor-in-chief of the journal Psychological Science.
