Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016
DOI: 10.18653/v1/n16-1104

Effective Crowd Annotation for Relation Extraction

Abstract: Can crowdsourced annotation of training data boost performance for relation extraction over methods based solely on distant supervision? While crowdsourcing has been shown effective for many NLP tasks, previous researchers found only minimal improvement when applying the method to relation extraction. This paper demonstrates that a much larger boost is possible, e.g., raising F1 from 0.40 to 0.60. Furthermore, the gains are due to a simple, generalizable technique, Gated Instruction, which combines an interact…
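For reference, the quoted gain is measured in F1, the harmonic mean of precision (P) and recall (R); the 0.40 and 0.60 figures are the scores reported above, while the underlying P and R values are not given here:

\[ F_1 = \frac{2PR}{P + R} \]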

Cited by 64 publications (75 citation statements). References 13 publications.
“…However, the approach generates blocking rules on structured data using a random forest, and the method cannot be applied to the text data studied in our approach. Crowdsourcing has also been applied to relation extraction [24, 1]. However, similar to entity matching, most of these works focus on tuple-level annotation.…”
Section: Related Work (mentioning)
confidence: 99%
“…We conducted the RE experiments in this setting, as this allowed us to directly compare with Liu et al. (2016). Thus, we used CS A annotated by our proposed method, along with the noisily annotated DS, to train the extractor.…”
Section: Results (mentioning)
confidence: 99%
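As a rough illustration of this kind of setup (training a single extractor on the union of noisily labeled distant-supervision examples and crowd-sourced examples), here is a minimal sketch; the example sentences, labels, features, and classifier are illustrative assumptions, not the pipeline used in the cited work.

```python
# Minimal sketch: train a sentence-level relation classifier on the union of
# distant-supervision (DS) and crowd-sourced (CS) training examples.
# Data, features, and model are placeholder assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example pairs a sentence mentioning two entities with a relation label.
ds_examples = [  # noisy labels obtained by distant supervision
    ("Barack Obama was born in Honolulu.", "born_in"),
    ("Marie Curie died in Passy.", "died_in"),
]
cs_examples = [  # labels collected from crowd workers
    ("Alan Turing was born in London.", "born_in"),
    ("Frida Kahlo died in Coyoacan.", "died_in"),
]

# Concatenate both sources and train one extractor on the combined data.
sentences = [s for s, _ in ds_examples + cs_examples]
labels = [y for _, y in ds_examples + cs_examples]

extractor = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
extractor.fit(sentences, labels)

print(extractor.predict(["Ada Lovelace was born in London."]))
```

In practice the crowd labels mainly serve to correct noise in the DS labels, so a real system would typically weight or filter the two sources rather than simply concatenating them as this sketch does.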
“…Similarly to Liu et al. (2016), we restricted our attention to 5 relations between person and location. For both DS and CS, we used the publicly available data provided by Liu et al. (2016). Ultimately, 221 crowd workers participated in the task, with a minimum of 2 and a maximum of 400 annotations per crowd worker.…”
Section: Corpora (mentioning)
confidence: 99%
“…Werling et al. (2015) use Bayesian decision theory to minimize human cost and maximize accuracy for named entity recognition. For certain IE tasks, however, human supervision can be very noisy and thus counterproductive, especially when it comes from crowds; previous work has therefore shown the importance of how tasks are posed to human labelers (Liu et al., 2016), as well as of automatically distinguishing simple labeling tasks from expert tasks in crowd-based task assignment (Wang et al., 2017). Our work shares a similar view of human supervision for IE, yet we instead study the impact of the annotation interface on overall performance.…”
Section: Related Work (mentioning)
confidence: 98%