Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-short.44

Quantifying and Avoiding Unfair Qualification Labour in Crowdsourcing

Abstract: Extensive work has argued in favour of paying crowd workers a wage that is at least equivalent to the U.S. federal minimum wage. Meanwhile, research on collecting high quality annotations suggests using a qualification that requires workers to have previously completed a certain number of tasks. If most requesters who pay fairly require workers to have completed a large number of tasks already, then workers need to complete a substantial amount of poorly paid work before they can earn a fair wage. Through analy…

Cited by 10 publications (8 citation statements). References 19 publications (18 reference statements).
“…These, for instance, can be requiring a certain percentage of accepted tasks or a certain number of already completed tasks. Kummerfeld (2021) analyzes the impact of these measures on quality and discusses the ethical aspects of requiring a minimum number of tasks. They argue that it forces workers to accept a substantial amount of low-paying tasks to overcome this hurdle.…”
Section: Annotator Management (mentioning; confidence: 99%)
“…Qualification Test. A more elaborate way to identify good annotators is to use (paid) qualification tests (Kummerfeld 2021). Before an interested annotator can participate in the primary annotation process, they must work on a small set of qualification tasks.…”
Section: Annotator Management (mentioning; confidence: 99%)
“…We recruit writers via Amazon Mechanical Turk (MTurk). The number of workers who participated in the study is listed in We adapt this qualification following the recommendation of Kummerfeld (2021) to avoid the exploitation of crowdworkers. He demonstrates that imposing these prepared criteria is not fair because crowdworkers need to work on poorly paid tasks to achieve those qualifications in most cases.…”
Section: B Crowdworker Recruitment and Payment (mentioning; confidence: 99%)
“…We are not looking at possible sources of social bias, although this issue should be highly relevant to those considering sources to use as training data for applied systems (Li et al., 2020; Parrish et al., 2022). We are using Amazon Mechanical Turk despite its history of sometimes treating workers unfairly (Kummerfeld, 2021), especially in recourse for unfair rejections. We make sure that our own pay and rejection policies are comparable to in-person employment, but acknowledge that our study could encourage others to use Mechanical Turk, and that they might not be so careful.…”
Section: Ethics Statement (mentioning; confidence: 99%)