2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.241
Ambiguity Helps: Classification with Disagreements in Crowdsourced Annotations


Cited by 23 publications (24 citation statements) · References 23 publications
“…The white noise kernel is expressed as k(x, x′) = r if x = x′ and 0 otherwise, where r is the noise level. In the future, we will look into ways of dealing with the problem of input-dependent noise level (i.e., different degrees of human mistakes and disagreements for different design instances) [38, 39].…”

Section: Novelty Discovery (mentioning, confidence: 99%)
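The white noise kernel quoted above can be sketched in a few lines. This is a minimal illustration, not code from the cited work; the function name `white_noise_kernel` and the default value of the noise level `r` are assumptions for the example.

```python
import numpy as np

def white_noise_kernel(X1, X2, r=0.5):
    """White noise kernel: k(x, x') = r if x == x', else 0.

    X1, X2 are 2-D arrays whose rows are input vectors;
    r is the noise level (an illustrative choice here).
    """
    K = np.zeros((len(X1), len(X2)))
    for i, x in enumerate(X1):
        for j, xp in enumerate(X2):
            if np.array_equal(x, xp):  # identical inputs get variance r
                K[i, j] = r
    return K

# Two distinct inputs: the kernel matrix is r on the diagonal, 0 elsewhere.
X = np.array([[0.0, 1.0], [2.0, 3.0]])
K = white_noise_kernel(X, X, r=0.5)
```

Because the kernel is nonzero only for identical inputs, it adds a constant variance term along the diagonal of the Gram matrix; an input-dependent noise level, as the quote anticipates, would replace the scalar `r` with a function of `x`.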
“…Ann (Paun et al., 2018; Pavlick and Kwiatkowski, 2019) and computer vision (Sharmanska et al., 2016) has leveraged annotator uncertainty to improve modeling. Thus, for our setting, we ask the following research question: RQ1: Is there inherent ambiguity in the language that expresses economic policy uncertainty?…”

Section: Subset (mentioning, confidence: 99%)
“…Many researchers have concluded that rather than attempting to eliminate disagreements from annotated corpora, we should preserve them; indeed, some researchers have argued that corpora should aim to collect all distinct interpretations of an expression (Smyth et al., 1994; Poesio and Artstein, 2005; Aroyo and Welty, 2015; Sharmanska et al., 2016; Plank, 2016; Kenyon-Dean et al., 2018; Firman et al., 2018; Pavlick and Kwiatkowski, 2019). Poesio and Artstein (2005) and Recasens et al. (2012) suggest that the best way to create resources capturing disagreements is by preserving implicit ambiguity, i.e., having multiple annotators label the items, and then keeping all these annotations, not just an aggregated ‘gold standard’.…”

Section: Introduction (mentioning, confidence: 99%)