
Using Prediction Markets to Forecast Replication Across Fields

August 27, 2020

The U.S. Department of Defense is trying to find a way to assess and improve the credibility of social and behavioral science research using algorithms. There’s a lot to unpack in that statement, but a key question is whether machines can really determine the credibility of research, ideally without conducting actual replications.

Replication is a key practice in scientific research, helping to control the impact of sampling error, questionable research practices, publication bias and fraud. Efforts to replicate studies are meant to establish credibility within a field (although the bravery of re-testing foundational findings can sometimes create a crisis mentality when too many fail to measure up). Since the research community can offer informed judgments about likely outcomes, the question arises again: could those outcomes be forecast formally without actually conducting replications?

DARPA, the Pentagon’s Defense Advanced Research Projects Agency, funded the ‘Systematizing Confidence in Open Research and Evidence’ (SCORE) program to answer these questions. SCORE aims to generate confidence scores for research claims from empirical studies in the social and behavioral sciences. These scores offer a quantitative assessment of how likely a claim is to hold up in an independent replication, helping DARPA pick winners for funding and application in the field.

In a paper published in Royal Society Open Science, a team of researchers asks a more detailed question of the process: “Are replication rates the same across academic fields? Community forecasts from the DARPA SCORE programme.” Authors Michael Gordon, Domenico Viganola, Michael Bishop, Yiling Chen, Anna Dreber, Brandon Goldfedder, Felix Holzmeister, Magnus Johannesson, Yang Liu, Charles Twardy, Juntao Wang, and Thomas Pfeiffer used prediction markets and surveys of the research community to forecast replication outcomes for material of interest to DARPA. This was the authors’ second look at using prediction markets and surveys to predict replication; their initial proof-of-principle study led them to elicit information on a large set of research claims, only a few of which would actually be replicated.
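The article does not detail the market mechanism the forecasters traded under, but replication prediction markets of this kind typically rely on an automated market maker such as Hanson’s logarithmic market scoring rule (LMSR), where the current price can be read as the crowd’s probability that a claim will replicate. Below is a minimal, hypothetical sketch of a binary LMSR market; the class name, liquidity parameter, and trade sizes are illustrative assumptions, not details from the paper.

```python
import math

# Minimal sketch of a logarithmic market scoring rule (LMSR) market maker
# for a binary claim: outcome 0 = "replicates", outcome 1 = "fails".
# This is an illustrative assumption about the mechanism, not the
# SCORE markets' actual implementation.

class LMSRMarket:
    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity        # higher b => prices move more slowly per trade
        self.q = [0.0, 0.0]       # outstanding shares for each outcome

    def cost(self, q) -> float:
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome: int) -> float:
        # Instantaneous price is a softmax over shares; read it as the
        # market's current probability of that outcome.
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome: int, shares: float) -> float:
        # A trader pays the difference in the cost function before and
        # after the trade; winning shares pay out 1 unit each.
        new_q = list(self.q)
        new_q[outcome] += shares
        payment = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return payment

# Example: a forecaster who believes a claim will replicate buys
# "replicates" shares, pushing the market probability above 0.5.
market = LMSRMarket(liquidity=100.0)
print(f"initial P(replicates) = {market.price(0):.2f}")   # 0.50
paid = market.buy(0, shares=50.0)
print(f"paid {paid:.2f}; new P(replicates) = {market.price(0):.2f}")
```

The appeal of this design for eliciting forecasts is that traders profit only by moving the price toward what they believe is the true probability, so the final price aggregates the community’s information about whether a claim will hold up.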

Most of the participants making predictions were drawn from academia, ranging from undergraduates to emeritus professors, although the majority were in early or mid-career. Their disciplines fell into six groupings: economics; political science; psychology; education; sociology and criminology; and marketing, management and related areas. Respondents tended to focus their predictions on their own fields.

The paper reports that participants expect replication rates to differ between fields, with the highest rate in economics and the lowest in psychology and education. Participants interested in economics were more optimistic about replication rates in economics than those outside the field; no comparable effect was found for the other fields. Moreover, participants who said they had been involved in a replication study before forecast, on average, a lower overall replication rate.

The authors note that participants expected replication rates to increase over time. “One plausible explanation might be that participants expect recent methodological changes in the social and behavioural sciences to have a positive impact on replication rates,” according to the paper. “This is also in line with an increased creation and use of study registries during this time period in the social and behavioural science.”

The forecasts presented in this study focus on field-specific and time-specific replication rates rather than the probability of replication for individual claims. Explicit forecasts for overall replication rates might be more reliable than what can be inferred from forecasts on individual replications.
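As a rough illustration of that distinction, an overall rate can be inferred from individual-claim forecasts by averaging them, whereas here participants were asked for field-level rates directly. The sketch below uses made-up numbers purely to show the arithmetic; they are not figures from the study.

```python
# Minimal sketch: inferring a field's expected replication rate as the
# mean of individual-claim forecast probabilities. All numbers are
# hypothetical illustrations, not data from the paper.
from statistics import mean

claim_forecasts = {
    "economics":  [0.72, 0.65, 0.81],   # P(claim replicates), per claim
    "psychology": [0.48, 0.55, 0.40],
}

for field, probs in claim_forecasts.items():
    # expected number of successful replications / number of claims
    print(f"{field}: inferred replication rate = {mean(probs):.2f}")
```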

See the original article here.


Kenalyn Ang is the social science communications intern with SAGE Publishing. She is also a communications student at the USC Annenberg School. Her research focuses on consumer behavior, identity-making and representation through the written word, and creative sensory marketing and branding strategies.

