2019
DOI: 10.1016/j.paid.2019.02.015

Noncompliant responding: Comparing exclusion criteria in MTurk personality research to improve data quality


Cited by 99 publications (80 citation statements: 0 supporting, 80 mentioning, 0 contrasting); references 25 publications.
“…Participants (N = 332, M age = 37.07, SD age = 11.36, 36% female, 79% United States) 1 were recruited from Amazon's MTurk and provided monetary compensation. Prior research has supported the validity of findings obtained from MTurk participants, and we applied exclusion guidelines from these sources to ensure sufficient data quality ( Barends & de Vries, 2019 ; Buchheit, Dalton, Pollard, & Stinson, 2019 ). Only participants that had completed more than 50 MTurk tasks with greater than 95% lifetime approval were included.…”
Section: Methods (mentioning)
confidence: 99%
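The excerpt above screens workers on two history-based criteria: more than 50 completed MTurk tasks and a lifetime approval rate above 95%. As a rough illustration only (the data frame and column names below are hypothetical, not taken from the cited study), such a screen could be applied post hoc like this:

import pandas as pd

def apply_worker_history_screen(df: pd.DataFrame,
                                min_hits: int = 50,
                                min_approval: float = 0.95) -> pd.DataFrame:
    """Keep only workers exceeding the history thresholds quoted above."""
    mask = (df["hits_completed"] > min_hits) & (df["approval_rate"] > min_approval)
    return df.loc[mask].copy()

# Toy usage: only worker A1 clears both thresholds.
sample = pd.DataFrame({
    "worker_id": ["A1", "A2", "A3"],
    "hits_completed": [340, 12, 75],
    "approval_rate": [0.99, 0.97, 0.90],
})
print(apply_worker_history_screen(sample))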
“… Note. Sources used to derive recommendations: 1 Antin and Shaw (2012); 2 Arechar, Gächter, and Molleman (2018); 3 Barends and de Vries (2019); 4 Bederson and Quinn (2011); 5 Behrend, Sharek, Meade, and Wiebe (2011); 6 Bergvall-Kåreborn and Howcroft (2014); 7 Brawley and Pury (2016); 8 Buchanan and Scofield (2018); 9 Buhrmester, Talaifar, and Gosling (2018); 10 Chandler, Mueller, and Paolacci (2014); 11 Chandler, Paolacci, Peer, Mueller, and Ratliff (2015); 12 Cheung, Burns, Sinclair, and Sliter (2017); 13 Clifford and Jerit (2014); 14 Deng, Joshi, and Galliers (2016); 15 Feitosa, Joseph, and Newman (2015); 16 Fieseler, Bucher, and Hoffmann (2017); 17 Gleibs (2017); 18 Goodman, Cryder, and Cheema (2013); 19 Hydock (2018); 20 Kan and Drummey (2018); 21 Litman, Robinson, and Rosenzweig (2015); 22 Mummolo and Peterson (2019); 23 Necka, Cacioppo, Norman, and Cacioppo (2016); 24 Wessling, Huber, and Netzer (2017); 25 Zhou and Fishbach (2016). HIT = human intelligence task. …”
Section: Literature Review (mentioning)
confidence: 99%
“…Many MTurker responses are unusable due to high attrition rates and MTurker inattention. Therefore, in addition to the sample size determined through a power analysis, it is useful to collect data from at least an additional 15% to 30% of MTurkers (Sprouse, 2011) to compensate for participant attrition and failure to pass attention and compliance checks (Barends & de Vries, 2019; Zhou & Fishbach, 2016).…”
Section: Recommendations (mentioning)
confidence: 99%
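The recommendation above amounts to inflating the power-analysis sample size by an oversampling buffer of 15% to 30%. A small sketch of that arithmetic (the function name and defaults are illustrative, not from the cited sources):

import math

def recruitment_target(n_required: int, buffer: float = 0.30) -> int:
    """Recruit N * (1 + buffer) workers so roughly n_required usable
    responses remain after attrition and failed attention/compliance checks."""
    return math.ceil(n_required * (1.0 + buffer))

# A power analysis calling for 200 usable responses:
print(recruitment_target(200, 0.15))  # 230 workers at a 15% buffer
print(recruitment_target(200, 0.30))  # 260 workers at a 30% buffer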
“…We excluded participants who failed our pre-registered attention checks, but the data quality still appeared poor compared to the undergraduate sample included in Study 1 and when compared to samples reported in prior research using similar measures (Siegelman et al., 2018). There is some suggestion in the literature that attention checks can be limited in effectiveness (Chandler et al., 2014; Hauser & Schwarz, 2016) and that other statistical approaches may be needed to identify engaged participants (e.g., Barends & de Vries, 2019; Dunn et al., 2018). Indeed, it is highly plausible that the seeming passiveness of the task (i.e., clicking on images as they appeared) rendered our experiment susceptible to participant distraction.…”
Section: Emotion and Statistical Learning (mentioning)
confidence: 99%
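The excerpt above notes that attention checks alone may not catch disengaged responders and points to statistical screening as a complement. One commonly used example of such a screen, shown here only as an illustration and not as the specific method of the cited papers, flags multivariate outliers in item responses via Mahalanobis distance:

import numpy as np
from scipy.stats import chi2

def mahalanobis_flags(responses: np.ndarray, alpha: float = 0.001) -> np.ndarray:
    """Flag rows whose response pattern is a multivariate outlier.

    responses: (n_participants, n_items) matrix of numeric item scores.
    Returns True where the squared Mahalanobis distance exceeds the
    chi-square cutoff with n_items degrees of freedom at the given alpha.
    """
    centered = responses - responses.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(responses, rowvar=False))  # pseudo-inverse tolerates near-singular covariance
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    return d2 > chi2.ppf(1.0 - alpha, df=responses.shape[1])

Flagged rows would typically be inspected before exclusion rather than dropped automatically.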