Crowdsourcing services, particularly Amazon Mechanical Turk, have made it easy for behavioral scientists to recruit research participants. However, researchers have overlooked crucial differences between crowdsourcing and traditional recruitment methods that present unique opportunities and challenges. We show that crowdsourced workers are likely to participate across multiple related experiments and that researchers are overzealous in the exclusion of research participants. We describe how both of these problems can be avoided using advanced interface features that also allow prescreening and longitudinal data collection. Using these techniques can minimize the effects of previously ignored drawbacks and expand the scope of crowdsourcing as a tool for psychological research.
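The exclusion and repeat-participation problems described above can be handled at the platform level with worker qualifications rather than after-the-fact data filtering. Below is a minimal sketch, assuming the boto3 MTurk client; the qualification name, prior HIT ID, and sandbox endpoint are illustrative placeholders, not values from the paper. It tags every worker who completed an earlier study with a custom qualification that later HITs can either exclude (to preserve naivete) or require (for longitudinal follow-ups).

```python
# Sketch: tag workers who completed an earlier HIT with a custom qualification
# so that follow-up HITs can exclude them (repeat-participation control) or
# target them (longitudinal designs). Assumes the boto3 MTurk client.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint for testing; remove endpoint_url to hit the live marketplace.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# 1. Create a qualification marking prior participation in this study series.
qual = mturk.create_qualification_type(
    Name="Participated: decision-making series (hypothetical)",
    Keywords="participation,exclusion,longitudinal",
    Description="Assigned to workers who completed an earlier study in this series.",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# 2. Assign it to every worker who submitted the earlier HIT.
PREVIOUS_HIT_ID = "3XXXXXXXXXXXXXXXXXXXX"  # placeholder; substitute a real HIT ID
response = mturk.list_assignments_for_hit(
    HITId=PREVIOUS_HIT_ID,
    AssignmentStatuses=["Submitted", "Approved"],
    MaxResults=100,  # paginate with NextToken for HITs with more assignments
)
for assignment in response["Assignments"]:
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=assignment["WorkerId"],
        IntegerValue=1,
        SendNotification=False,  # assign silently; workers need not be contacted
    )
```

Because the qualification is attached to worker IDs rather than to a single survey, the same tag can be reused across an entire series of related experiments.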
The Internet has democratized knowledge by lowering barriers to the consumption, dissemination, and creation of knowledge. Although social scientists have long relied on the Internet for data collection, difficulties in recruiting and compensating participants have inhibited data collection online. A Web site called Mechanical Turk (MTurk) has recently offered a solution to these technical challenges. MTurk is an online labor market created by Amazon to assist "requesters" in hiring and paying "workers" for the completion of computerized tasks. Tasks (e.g., transcribing text) are typically completed within minutes and usually pay in cents rather than dollars. Social scientists have recently discovered the potential of the MTurk workforce as a large pool of participants, constantly available to complete research studies at low cost. Today, it is not uncommon to read empirical articles that are based entirely on data collected using MTurk.

With the surge of interest in MTurk as a participant-recruitment tool have come questions regarding its reliability. What are the characteristics of the MTurk population? Why do workers become research participants? Are the data collected on MTurk of adequate quality? Reservations are justified particularly because MTurk is not a participant pool, and it presents researchers with challenges that other pools do not (e.g., how to select participants on the basis of their characteristics). We integrate the available evidence that speaks to whether and how researchers can use MTurk as a data-collection tool.

Characteristics of MTurk Samples

Workers choose to complete MTurk tasks for minimal pay, which raises questions about who they are and why they do so. Although payment is an important factor, self-reports indicate that workers are driven by both extrinsic and intrinsic motives (e.g., workers have reported that they complete tasks "to make basic ends meet" and because "tasks are fun"; Paolacci, Chandler, & Ipeirotis, 2010; Ross, Irani, Silberman, Zaldivar, & Tomlinson, 2010), which suggests that the rewards of working on MTurk are not merely monetary. In 2014, the MTurk workforce comprised more than 500,000 individuals from 190 countries.
Data collection in consumer research has progressively moved away from traditional samples (e.g., university undergraduates) and toward Internet samples. In the last complete volume of the Journal of Consumer Research (June 2015–April 2016), 43% of behavioral studies were conducted on the crowdsourcing website Amazon Mechanical Turk (MTurk). The option to crowdsource empirical investigations has great efficiency benefits for both individual researchers and the field, but it also poses new challenges and questions for how research should be designed, conducted, analyzed, and evaluated. We assess the evidence on the reliability of crowdsourced populations and the conditions under which crowdsourcing is a valid strategy for data collection. Based on this evidence, we propose specific guidelines for researchers to conduct high-quality research via crowdsourcing. We hope this tutorial will strengthen the community's scrutiny of data-collection practices and move the field toward better and more valid crowdsourcing of consumer research.
Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online and offline methods of recruiting subjects, and comparing the magnitude of effects obtained using Mechanical Turk and traditional subject pools. We further discuss additional benefits, such as the possibility of longitudinal, cross-cultural, and prescreening designs, and offer advice on how best to manage a common subject pool.
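One way the prescreening and pool-management features mentioned above are typically exercised is by attaching qualification requirements when a HIT is posted, so that MTurk enforces eligibility before workers can accept. The sketch below again assumes the boto3 MTurk client; the survey URL, reward, title, and custom qualification ID are illustrative placeholders, and the Locale and approval-rate entries use Amazon's documented system qualification types.

```python
# Sketch: post a HIT whose qualification requirements exclude previously tagged
# workers and prescreen on location and approval rate. Placeholder values are
# marked; this is not the authors' own code.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",  # sandbox
)

# ID of the custom "prior participant" qualification created earlier (placeholder).
qual_id = "3YYYYYYYYYYYYYYYYYYYY"

external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="15-minute decision-making survey (hypothetical)",
    Description="Answer a short series of questions about everyday choices.",
    Keywords="survey,research",
    Reward="1.50",                        # US dollars, passed as a string
    MaxAssignments=100,
    AssignmentDurationInSeconds=30 * 60,
    LifetimeInSeconds=3 * 24 * 60 * 60,
    Question=external_question,
    QualificationRequirements=[
        {   # exclude workers already tagged as prior participants
            "QualificationTypeId": qual_id,
            "Comparator": "DoesNotExist",
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        },
        {   # prescreen: US-based workers (system Worker_Locale qualification)
            "QualificationTypeId": "00000000000000000071",
            "Comparator": "EqualTo",
            "LocaleValues": [{"Country": "US"}],
            "ActionsGuarded": "PreviewAndAccept",
        },
        {   # prescreen: assignment approval rate of at least 95%
            "QualificationTypeId": "000000000000000000L0",
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [95],
            "ActionsGuarded": "PreviewAndAccept",
        },
    ],
)
print("HIT created:", hit["HIT"]["HITId"])
```

Because the DoesNotExist requirement is enforced before acceptance, ineligible workers never see the HIT, which is one way to screen the pool up front rather than excluding participants after data collection.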
In this chapter, we outline the common concerns with MTurk as a participant pool, review the evidence for those concerns, and discuss solutions. We close with a table of considerations that researchers should weigh when fielding a study on MTurk.