
Paper on Amazon’s Mechanical Turk Proves a Durable Article

August 5, 2022

Being the first, or at least among the first, is generally an advantageous position in most endeavors. And so it proves in SAGE Publishing’s annual 10-Year Impact Awards, where a 2011 paper on Amazon’s then-new Mechanical Turk, an online marketplace that, among other things, crowdsources prospective participants for social and behavioral research, has garnered 7,500 citations in the decade since.

That makes it the most-cited paper appearing in a SAGE-published journal in 2011. SAGE (the parent of Social Science Space) started the 10-Year Impact Awards in 2020 as one way to demonstrate the value of social and behavioral science. While article citations and journal impact factors are the standard measures of literature-based impact in academia, their two- or five-year windows don’t account for papers whose influence grows over time or that are recognized at a later date. The problem is especially acute in the social sciences, where impact factors “tend to underestimate” the value of social science research because of time lags and the field’s interest in new approaches, rather than solely iterative ones.

One such new approach was MTurk, as the Amazon platform is known. “Amazon’s Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data?” in Perspectives on Psychological Science describes the then-potential contributions of MTurk to the social sciences. The authors, Michael Buhrmester, Tracy Kwang and Samuel D. Gosling, all then in the Department of Psychology at the University of Texas at Austin, found that “overall, MTurk can be used to obtain high-quality data inexpensively and rapidly.”

As part of a series of conversations with the authors of this year’s winners, we asked Buhrmester, the lead author of the paper, to reflect on the impact the article has had in the decade since it appeared.

Michael Buhrmester

In your estimation, what in your research – and obviously the published paper – is it that has inspired others or that they have glommed onto?  

The paper, along with several others published around the same time, served a few very practical purposes for researchers. First, it introduced a platform, relatively new at the time, for collecting data online with a relatively low barrier to entry. Collecting data online in the pre-MTurk era was certainly possible for researchers, but there wasn’t an efficient ‘plug and play’ approach like the one MTurk offered. Second, our evaluation of the quality of the data at that time suggested, in essence, that it was as defensible a source of data as other common sources. Along with other evaluations that came to similar conclusions, I believe this had the effect of reducing some unfounded skepticism and fear of online data collection methods generally. Last, and most importantly, I believe the paper helped spark a more substantial, continuous evaluation not just of MTurk and platforms like it, but of all the major data collection methods used by social scientists.

What, if anything, would you have done differently in the paper (or underlying research) if you were to go back in time and do it again?  

I’ve found that the paper is often cited as a comprehensive defense for any use of MTurk for data collection, and that’s a shame because there have been so many more thorough evaluations in the ensuing years. These evaluations have uncovered a host of issues and solutions, many of which would have been tough to anticipate. However, I do wish we had more actively conveyed the equivalent of “Warning: Conditions May Change Quickly” to encourage researchers to seek out the most up-to-the-minute evaluations.

What direct feedback – as opposed to citations – have you received in the decade since your paper appeared?  

Over the years, I’ve personally responded to well over 1,000 emails from researchers at all career stages from all over the world covering just about any MTurk-related question one could imagine. I’ve learned (and forgotten) more about MTurk than I’d imagined while drafting the manuscript.  

How have others built on what you published? (And how have you yourself built on it?)

Beyond the proliferation of other online data platform evaluations, ambitious teams have built better mousetraps over the years (e.g., Prolific). As for me, apart from taking part in the community of researchers engaged in issues related to online methodologies, my program of research largely focused on my true passion and area of expertise: uncovering the causes and consequences of identity fusion with my grad advisor, Bill Swann, and post-doc advisor, Harvey Whitehouse.

Could you name a paper (or other scholarly work) that has had the most, or at least a large, impact on you and your work? 

Bill Swann’s seminal work on Self-Verification Theory (pick any paper from the late ’70s to early ’90s) is why I became a social psychologist and is a foundation upon which pretty much all of my work (a lot of it with Bill!) rests.

Lastly, since it seems you have all taken different paths since writing the paper, I’d love to know what you have done in the last decade.

After a fruitful and adventure-filled post-doc working jointly at the University of Texas and Oxford, I took the leap out of academia a couple of years ago. I now work as a quantitative researcher at Gartner, applying social psychology to generating actionable insights for business leaders.


An interview with the lead author of the third most-cited article in the 10-Year Impact Awards, on the Danish National Patient Register, appears here.

Sage, the parent of Social Science Space, is a global academic publisher of books, journals, and library resources with a growing range of technologies to enable discovery, access, and engagement. Believing that research and education are critical in shaping society, 24-year-old Sara Miller McCune founded Sage in 1965. Today, we are controlled by a group of trustees charged with maintaining our independence and mission indefinitely. 

