2019
DOI: 10.1111/jasp.12580
A conceptual replication examining the risk of overtly listing eligibility criteria on Amazon’s Mechanical Turk

Abstract: Recent scholarship indicates that explicitly listing eligibility requirements on Amazon’s Mechanical Turk can lead to eligibility falsification. Offering a conceptual replication of prior studies, we assessed the prevalence of eligibility falsification and its impact on data integrity. A screener survey collected the summer before the 2016 presidential election assessed political affiliation. Participants were then randomly assigned to be exposed to a second survey link for which they were eligible or ineligib…

Cited by 19 publications (14 citation statements); references 39 publications.
“…In addition, researchers have found across multiple studies that between 2.2 and 28 percent of participants on MTurk misrepresent their qualifications, even after utilizing best practice measures to ensure data quality (MacInnis et al., 2020). Fraudulent and dishonest behavior is even higher when researchers have over‐restrictive participation criteria or aim to recruit specialist samples such as full‐time employees, entrepreneurs or professionals from a certain industry (Chandler & Paolacci, 2017; Siegel & Navarro, 2019; Siegel, Navarro, & Thomson, 2015; Wessling, Huber, & Netzer, 2017).…”
Section: Challenges of Using Online Platforms for Data Collection
Confidence: 99%
“…This includes screenshots of human intelligence tasks (HITs), study eligibility criteria, embedded attention checks and survey links. Among all five intervention studies that our team has conducted on MTurk to date, we were able to find evidence of this information sharing on TurkOpticon, Reddit, MTurkCrowd, TurkView and mturkforum [7][8][9][10]. Therefore, it is recommended that researchers monitor these common websites regularly during recruitment.…”
Section: Declaration of Interests
Confidence: 84%
“…We offer four practical solutions to address this concern. The first solution is to avoid providing explicit eligibility criteria when recruiting through MTurk [8]. This approach requires a delicate balance between providing participants with sufficient information to make an informed decision without making the eligibility criteria obvious.…”
Section: Figure
Confidence: 99%
“…MTurk staff claims over 200,000 respondents in the United States (Robinson et al., 2019). Of those, estimates for African American respondents vary between 5.8% and 8.1% (Jeong et al., 2019; Siegel & Navarro, 2019; Walter et al., 2019). MTurk is more reliable than other online panels when respondents are experienced, there are checks to ensure that respondents are paying attention, and respondents are not allowed to re-take surveys (Christenson & Glick, 2013; Hauser & Schwarz, 2016; Kees et al., 2017a; Paas et al., 2018; Paolacci & Chandler, 2014).…”
Section: Sample
Confidence: 99%