2018
DOI: 10.2308/isys-52021

Using MTurk to Distribute a Survey or Experiment: Methodological Considerations

Abstract: Amazon Mechanical Turk (MTurk) is a powerful tool that is increasingly being used to recruit behavioral research participants for accounting research. This manuscript provides practical and technical knowledge learned from firsthand experience to help researchers collect high-quality, defendable data for research purposes. We highlight two issues of particular importance when using MTurk: (1) accessing qualified participants, and (2) validating collected data. To address these issues, we discuss alternative m…

Cited by 92 publications (66 citation statements) · References 39 publications · Citing statements published 2019–2024

“…To test our model, we used a sample indicative of the intended population—lay individuals from the U.S. We collected data from the U.S. working population using Amazon’s Mechanical Turk. We followed the latest best-practice procedures (e.g., Chmielewski and Kucker, 2020 ; Hunt and Scheetz, 2019 ; Kennedy et al., 2018 ) for recruitment and design using online-sourced survey platforms, such as (a) requesting only those from the United States, (b) those with HIT (Human Intelligence Task) ratings > 95%, (c) paying respondents $2.00, and (d) including three attention check questions. After requesting 1000 initial HITs, and only including those passing all three attention checks, our final sample size was 581.…”
Section: Methods (mentioning)
confidence: 99%
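The recruitment settings quoted above (U.S. locale only, HIT approval rating above 95%, a $2.00 reward, 1,000 requested assignments) map directly onto MTurk's qualification requirements. Below is a minimal sketch using the boto3 MTurk client; the title, description, survey URL, and duration values are placeholder assumptions, not values from the cited study.

```python
import boto3

# Sketch only: region, text fields, and the survey URL are assumptions.
mturk = boto3.client("mturk", region_name="us-east-1")

qualification_requirements = [
    {   # Restrict to workers located in the United States
        "QualificationTypeId": "00000000000000000071",  # system Locale qualification
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
    {   # Restrict to workers with a HIT approval rate above 95%
        "QualificationTypeId": "000000000000000000L0",  # PercentAssignmentsApproved
        "Comparator": "GreaterThan",
        "IntegerValues": [95],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
]

# ExternalQuestion points MTurk at a survey hosted elsewhere (URL is a placeholder)
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Academic survey (placeholder title)",
    Description="Complete a short research survey.",
    Keywords="survey, research",
    Reward="2.00",                      # per-assignment payment, as in the quote
    MaxAssignments=1000,                # assignments requested, as in the quote
    AssignmentDurationInSeconds=3600,   # assumed time limit
    LifetimeInSeconds=7 * 24 * 3600,    # assumed posting window
    QualificationRequirements=qualification_requirements,
    Question=external_question,
)
print(hit["HIT"]["HITId"])
```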
“…Also, participants can use computer algorithms and artificial intelligence bots to complete a study multiple times on crowdsourcing platforms, leading to the possibility of invalid data within a sample (Chmielewski & Kucker, 2020). Concerns related to cheaters may be addressed by selecting a platform that allows for targeted recruitment of highly qualified participants or the use of source coding that limits survey access to only those who have accepted a human intelligence task (Hunt & Scheetz, 2019). Last, all crowdsourcing platforms are not equal, and the data quality, integrity, and composition can differ depending on the vendor (S. M. Smith et al., 2016).…”
Section: Limitations Of Crowdsourcing For Counseling Research (mentioning)
confidence: 99%
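One common form of the "source coding" gate described above checks the parameters MTurk appends to an external survey URL (workerId, assignmentId, hitId) and rejects preview traffic, whose assignmentId is the documented sentinel value ASSIGNMENT_ID_NOT_AVAILABLE. A minimal sketch using Flask follows; the route path and the render_survey helper are hypothetical.

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Value MTurk sends in place of a real assignmentId while a worker previews a HIT
PREVIEW_SENTINEL = "ASSIGNMENT_ID_NOT_AVAILABLE"

@app.route("/survey")
def survey_gate():
    worker_id = request.args.get("workerId", "")
    assignment_id = request.args.get("assignmentId", "")
    # Block previews and direct visits that lack the MTurk-supplied parameters,
    # so only workers who have accepted the HIT can reach the instrument
    if not worker_id or assignment_id in ("", PREVIEW_SENTINEL):
        abort(403)
    return render_survey(worker_id, assignment_id)

def render_survey(worker_id: str, assignment_id: str) -> str:
    # Placeholder: a real implementation would serve the instrument and
    # log worker_id/assignment_id for later validation
    return f"Survey for worker {worker_id} (assignment {assignment_id})"
```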
“…The increasing popularity of fee-based crowdsourcing platforms has encouraged the implementation of strategies to improve data quality by catching speeders and cheaters. Using screening questions, incorporating attention check questions, and restricting participation to workers with a reputation for thoroughness are strategies to increase data quality (Hauser & Schwarz, 2016; Hunt & Scheetz, 2019; Rouse, 2015). A method of preventing cheating includes Qualtrics's RelevantID, which assesses participant metadata to detect fraudulent behaviors.…”
Section: Limitations Of Crowdsourcing For Counseling Research (mentioning)
confidence: 99%
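As an illustration of the attention-check screening this statement describes, the filter below keeps only respondents who answered all three checks correctly. The column names and expected answers are assumptions for the sketch, not items from any cited instrument.

```python
import pandas as pd

df = pd.read_csv("survey_export.csv")  # hypothetical survey export

# Expected responses to three attention checks (columns and answers are assumed)
ATTENTION_KEY = {"ac1": "Strongly agree", "ac2": "Purple", "ac3": "Seven"}

passed = pd.Series(True, index=df.index)
for col, answer in ATTENTION_KEY.items():
    # Normalize whitespace before comparing against the expected answer
    passed &= df[col].astype(str).str.strip().eq(answer)

final_sample = df[passed]
print(f"{len(final_sample)} of {len(df)} respondents passed all three checks")
```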
“…As Wang and Murnighan (2017) note, MTurkers provide data with external validity and quality equal to or better than laboratory and other online platforms (e.g., Buhrmester, Kwang, and Gosling, 2011; Horton, Rand, and Zeckhauser, 2011; Paolacci, Chandler, and Ipeirotis, 2010). We recruited MTurkers using steps outlined by Hunt and Scheetz (2018) to maintain experimental control. For example, we validate responses by providing each participant a unique confirmation code and use a series of pre-experiment screening questions to ensure we only collected data from qualified participants (i.e., office workers with experience interacting with external auditors).…”
Section: Participants (mentioning)
confidence: 99%
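A simple way to implement the unique-confirmation-code validation described above is to show each respondent a random token on the survey's final page and later match the codes workers paste into MTurk against the survey platform's records. A sketch follows, with all file and column names assumed for illustration.

```python
import secrets

import pandas as pd

def new_confirmation_code() -> str:
    # Hard-to-guess 8-character code displayed at the end of the survey
    return secrets.token_hex(4).upper()

# Hypothetical exports: the survey platform logs the code it displayed,
# and the MTurk batch file holds the code each worker submitted
survey = pd.read_csv("survey_export.csv")  # assumed columns: worker_id, code_shown
batch = pd.read_csv("mturk_batch.csv")     # assumed columns: WorkerId, Answer.code

merged = batch.merge(survey, left_on="WorkerId", right_on="worker_id", how="left")
merged["valid"] = merged["Answer.code"].astype(str).str.strip().eq(merged["code_shown"])
print(merged["valid"].value_counts(dropna=False))
```

Only submissions whose pasted code matches a code the survey actually issued are approved; mismatches flag respondents who never completed the instrument.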