NAS Takes Detailed Look at Reproducibility and Replicability
This Tuesday at 9 a.m., the National Academies of Sciences, Engineering, and Medicine will host a symposium on its recent report, Reproducibility and Replicability in Science. The 200-page report defines reproducibility and replicability, explores the issues surrounding them, and examines how they might affect public confidence in science.
Both the academic and popular media have voiced concerns about reproducibility and replicability in recent years. As The Atlantic noted for one discipline, "Psychology's Replication Crisis Is Running Out of Excuses." The attention isn't a bad thing: reproducibility and replicability deserve the spotlight now more than ever, as new forums and media for data collection emerge and new technologies rapidly transform research methods.
For the sake of knowledge about the world alone, scientific validity is of the utmost importance. It matters even more when scientific studies become the basis of policy or otherwise directly affect human well-being. Think of products mis-marketed on the strength of flawed research (cigarettes), or legislation built on inconsistent findings. You certainly wouldn't want to ban night flights because one study of four pilots found that they became exceptionally tired between 2 and 3 a.m. First and foremost, the methodology would be questionable: a sample size of four? More importantly, individual studies that have not been replicated (or that prove non-replicable) are a suspect basis for legislation.
In seeking to improve the reproducibility and replicability of studies, the report begins by defining the two terms: "We define reproducibility to mean computational reproducibility—obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis; and replicability to mean obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data."
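To make the computational sense of reproducibility concrete, here is a minimal, hypothetical Python sketch (the function name, data, and seed are illustrative, not drawn from the report). Because the random seed is fixed and the inputs are pinned, an independent party rerunning this analysis with the same data and code obtains exactly the same result:

```python
import random
import statistics

def bootstrap_mean(data, seed=42):
    """Toy analysis: bootstrap estimate of a sample mean."""
    # Fixed seed: rerunning with the same data, code, and seed
    # yields the identical result, which is the report's sense
    # of computational reproducibility.
    rng = random.Random(seed)
    resamples = [
        statistics.mean(rng.choices(data, k=len(data)))
        for _ in range(1000)
    ]
    return statistics.mean(resamples)

data = [2.1, 3.4, 2.9, 3.8, 3.1]  # the "same input data"
print(bootstrap_mean(data))       # same output on every rerun
```

Change the seed, the data, or the code, and the output changes; hold all three constant, and anyone can reproduce the computation.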
The report offers researchers, academic journals, funding agencies, science foundations, and others procedures to ensure the reproducibility and replicability of research. "Researchers should take care to estimate," the report advises, "and explain the uncertainty inherent in their results, to make proper use of statistical methods, and to describe their methods and data in a clear, accurate, and complete way."
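The recommendation to estimate and explain uncertainty can be illustrated with another short, hypothetical Python sketch (the numbers and the hard-coded critical value are assumptions for this toy example): rather than reporting a bare point estimate, the researcher reports it alongside a confidence interval.

```python
import statistics
from math import sqrt

def mean_with_ci(sample, t_crit=2.776):
    """Report an estimate together with its uncertainty.

    t_crit is the two-sided 95% Student's t critical value for
    len(sample) - 1 = 4 degrees of freedom, hard-coded for this
    toy example; a real analysis would look it up for its n.
    """
    n = len(sample)
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / sqrt(n)  # standard error of the mean
    return mean, (mean - t_crit * sem, mean + t_crit * sem)

# Five measurements; reporting only the mean would hide the spread.
estimate, (low, high) = mean_with_ci([2.1, 3.4, 2.9, 3.8, 3.1])
print(f"mean = {estimate:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

The interval, not the point estimate, is what tells a reader how much confidence a single small study deserves.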
There is a distinct possibility that the inability to repeat a study will have widespread consequences: public confidence in the sciences may be undermined. Yet when a scientific effort fails to independently confirm the computations or results of a previous study, some may see a lack of rigor, while others argue the failure can presage new discovery.