Interspeech 2014
DOI: 10.21437/interspeech.2014-478

Cross-language transfer of semantic annotation via targeted crowdsourcing

Cited by 8 publications (5 citation statements, published 2015-2023). References 12 publications.
“…; L11: Sympathy and support: Tweets with prayers, thoughts, and emotional support; For the manual annotation, we opted to use the Amazon Mechanical Turk (AMT) platform. In crowdsourcing, one of the challenges is to find a large number of qualified workers while filtering out low-quality workers or spammers (Chowdhury et al. 2014). To tackle this problem, a typical approach is to use qualification tests followed by a gold standard evaluation (Chowdhury et al. 2015).…”
Section: Manual Annotations (mentioning)
confidence: 99%
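As a rough illustration of the qualification-test step described in that statement, the sketch below scores a worker's answers against a small answer key before admitting them to the task. The item IDs, answers, and the 80% cutoff are hypothetical and not taken from the cited papers.

```python
# Minimal sketch of a crowdsourcing qualification test: a worker is admitted
# only if their quiz answers match the known key often enough.
# The quiz items, answers, and the 0.8 cutoff are illustrative assumptions.

def passes_qualification(worker_answers, answer_key, min_accuracy=0.8):
    correct = sum(1 for item, answer in answer_key.items()
                  if worker_answers.get(item) == answer)
    return correct / len(answer_key) >= min_accuracy

answer_key = {"q1": "A", "q2": "C", "q3": "B", "q4": "A", "q5": "D"}
worker_answers = {"q1": "A", "q2": "C", "q3": "B", "q4": "B", "q5": "D"}
print(passes_qualification(worker_answers, answer_key))  # 4/5 = 0.8 -> True
```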
“…If we compare our task with other previous tasks that used a similar metric, we can better understand the complexity of the task and judge the quality of the annotations. Chowdhury et al. (2014) worked on the task of semantic annotation of utterances from conversations. One of the sub-tasks annotators had to perform was selecting a text span describing a hardware concept.…”
Section: Discussion (mentioning)
confidence: 99%
“…An alternative agreement measure that does not require knowledge of true negatives for its calculation is Positive (Specific) Agreement (Fleiss, 1975), which is similar to the widely used F-measure (Hripcsak and Rothschild, 2005). It has previously been shown to be useful in the evaluation of crowdsourced annotation tasks similar to ours (Stepanov et al., 2018; Chowdhury et al., 2014).…”
Section: Evaluation of Annotations (mentioning)
confidence: 99%
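For readers unfamiliar with the measure, positive (specific) agreement between two annotators can be sketched as below: with a the items both mark positive and b, c the one-sided positives, it is 2a / (2a + b + c), which coincides with the F-measure when one annotator is treated as the reference; true negatives never enter the computation. The example labels are hypothetical.

```python
# Positive (specific) agreement between two annotators' binary labels.
# a = items both marked positive; b, c = items only one annotator marked positive.
# True negatives never appear in the formula, so the count of negative items is irrelevant.

def positive_agreement(labels_1, labels_2):
    a = sum(1 for x, y in zip(labels_1, labels_2) if x and y)
    b = sum(1 for x, y in zip(labels_1, labels_2) if x and not y)
    c = sum(1 for x, y in zip(labels_1, labels_2) if not x and y)
    return 2 * a / (2 * a + b + c) if (2 * a + b + c) else 0.0

ann1 = [1, 1, 0, 1, 0, 0]
ann2 = [1, 0, 0, 1, 1, 0]
print(positive_agreement(ann1, ann2))  # 2*2 / (2*2 + 1 + 1) = 0.667
```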
“…For the manual annotation, we opted to use the Appen crowdsourcing platform. One of the challenges with crowdsourced annotation is to find a large number of qualified workers while filtering out low-quality workers or spammers (Chowdhury et al., 2015; Chowdhury et al., 2014). To ensure the quality of the annotation, we followed a standard evaluation (Chowdhury et al., 2020b), i.e., we used 150 gold standard test tweets. Based on these gold standard test tweets, each annotator needed to pass at least 70% of the tweets to participate in the annotation task.…”
Section: Manual Annotation (mentioning)
confidence: 99%
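A rough sketch of the gating described there (150 gold-standard tweets, at least 70% correct to be admitted) follows; the tweet IDs, label set, and simulated annotator are hypothetical placeholders, not data from the cited work.

```python
import random

# Hypothetical sketch of the gold-standard gate from the quoted setup:
# an annotator is admitted only if they match the gold label on at least
# 70% of the 150 gold test tweets. Labels and IDs are made up for illustration.

def admit_annotator(annotator_labels, gold_labels, threshold=0.70):
    correct = sum(1 for tweet_id, gold in gold_labels.items()
                  if annotator_labels.get(tweet_id) == gold)
    return correct / len(gold_labels) >= threshold

random.seed(0)
gold = {f"t{i}": random.choice(["relevant", "not_relevant"]) for i in range(150)}
# Simulate an annotator who agrees with the gold label about 85% of the time.
annotator = {tid: lab if random.random() < 0.85
             else ("not_relevant" if lab == "relevant" else "relevant")
             for tid, lab in gold.items()}
print(admit_annotator(annotator, gold))  # expected True at ~85% agreement
```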