2021
DOI: 10.1145/3479531

On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices

Abstract: Crowdsourcing is being increasingly adopted as a platform to run studies with human subjects. Running a crowdsourcing experiment involves several choices and strategies to successfully port an experimental design into an otherwise uncontrolled research environment, e.g., sampling crowd workers, mapping experimental conditions to micro-tasks, or ensuring quality contributions. While several guidelines inform researchers in these choices, guidance on how and what to report from crowdsourcing experiments has been l…

Cited by 18 publications (9 citation statements)
References 73 publications
“…In both cases above, it is impossible to distinguish whether pre-screening and eligibility checks took place but were simply not reported (and therefore studies were coded as not including these steps). Future research may benefit from clear reporting guidelines for crowdsourced studies (see Ramírez et al., 2021).…”
Section: Discussion
confidence: 99%
“…Initial steps have been taken towards defining a taxonomy of relevant attributes to report on crowdsourcing studies, such as the employed crowd, the task shown to the workers, the applied quality control mechanisms, and the experimental design (Ramírez et al. 2020; Ramírez et al. 2021). We believe that cognitive biases are an additional factor to consider in reports on crowdsourcing studies.…”
Section: Discussion
confidence: 99%
“…Specifically for crowdsourcing annotations, Ramírez et al. (2020) proposed a set of guidelines for reporting crowdsourcing experiments to better account for reproducibility purposes. Ramírez et al. (2021) then followed up on this work by proposing a checklist that requesters can use to comprehensively report on their crowdsourced data sets. This body of research aligns with and facilitates current efforts towards more trustworthy artificial intelligence through better documentation (Arnold et al. 2019; Stoyanovich and Howe 2019).…”
Section: Quality in Crowdsourced Annotations
confidence: 99%
“…The first, by Gebru et al. [14], pertains to the effective documentation of machine learning datasets, supporting the transparency and reproducibility of their creation process. The second, by Ramírez et al. [31], pertains to the detailing of crowdsourcing experiments to guarantee clarity and repeatability. It ensures that the impact of task design, data processing, and other factors on our conclusions, as well as their validity, can be assessed.…”
Section: WDV: An Annotated Wikidata Verbalisation Dataset
confidence: 99%