2020
DOI: 10.14778/3407790.3407842
Detecting and preventing confused labels in crowdsourced data

Abstract: Crowdsourcing is a challenging activity for many reasons, from task design to worker training, identification of low-quality annotators, and many more. A particularly subtle form of error is the confusion of observations: crowd workers (including diligent ones) confuse items of a class i with items of a class j, either because the classes are similar or because the task description has failed to explain the differences. …
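The detection idea sketched in the abstract can be made concrete with a small example. The snippet below is a simplified illustration, not the estimator from the paper: it takes worker annotations, uses per-item majority votes as reference labels, and flags class pairs whose mutual confusion rate exceeds a threshold. All names (detect_confused_pairs, annotations, threshold) and the toy data are hypothetical.

```python
from collections import Counter, defaultdict
import itertools

def detect_confused_pairs(annotations, threshold=0.15):
    """Flag class pairs (i, j) that workers appear to confuse.

    annotations: dict mapping item_id -> list of worker labels.
    Returns (i, j, rate) tuples whose mutual confusion rate
    exceeds `threshold`.

    Simplified sketch: the per-item majority vote serves as the
    reference label, and we count how often workers answer j on
    items whose reference label is i (and vice versa).
    """
    # Reference label per item: plain majority vote.
    reference = {item: Counter(labels).most_common(1)[0][0]
                 for item, labels in annotations.items()}

    # counts[i][j]: times a worker said j on an item whose reference is i.
    counts = defaultdict(Counter)
    for item, labels in annotations.items():
        for label in labels:
            counts[reference[item]][label] += 1

    confused = []
    for i, j in itertools.combinations(sorted(counts), 2):
        total = sum(counts[i].values()) + sum(counts[j].values())
        cross = counts[i][j] + counts[j][i]  # off-diagonal mass
        if total and cross / total > threshold:
            confused.append((i, j, cross / total))
    return confused

if __name__ == "__main__":
    # Toy data: workers often mix up "husky" and "wolf".
    votes = {
        "img1": ["husky", "wolf", "husky"],
        "img2": ["wolf", "wolf", "husky"],
        "img3": ["cat", "cat", "cat"],
    }
    print(detect_confused_pairs(votes))  # [('husky', 'wolf', 0.333...)]
```

In a real pipeline one would replace majority voting with a proper label-aggregation model, and use the flagged pairs to revise task instructions or to warn workers of the risk of confusion, which is the intervention the citing works below credit with improving annotation performance.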

Cited by 8 publications (2 citation statements)
References 37 publications

“…Both difficult and ambiguous cases can lead to label confusion. Krivosheev et al. (2020) developed mechanisms to efficiently detect label confusion in classification tasks and demonstrated that alerting workers of the risk of confusion can improve annotation performance.…”
Section: Motivation and Background
confidence: 99%

“…Both difficult and ambiguous cases can lead to label confusion. Krivosheev et al. (2020) developed mechanisms to efficiently detect label confusion in classification tasks and demonstrated that alerting workers of the risk of confusion can improve annotation performance.…”
Section: Understanding Disagreement
confidence: 99%