Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1199
Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning

Abstract: Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come without cost: the resulting distantly supervised training samples are often very noisy. To combat the noise, most recent state-of-the-art approaches focus on selecting one best sentence or calculating soft attention weights over the set of sentences for one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumb…

Cited by 210 publications (152 citation statements)
References 20 publications
“…work focuses on sentence-level RE, i.e., extracting relational facts from a single sentence. In recent years, various neural models have been explored to encode relational patterns of entities for sentence-level RE, and achieve state-of-the-art performance (Socher et al., 2012; Zeng et al., 2014, 2015; dos Santos et al., 2015; Xiao and Liu, 2016; Cai et al., 2016; Lin et al., 2016; Wu et al., 2017; Qin et al., 2018; Han et al., 2018a).…”
Section: Introduction
confidence: 99%
“…com/tyliupku/soft-label-RE. Table 5: AUC values of previous work and our models, where ATT BL+DSGAN and ATT BL+RL are two models proposed in (Qin et al., 2018a) and (Qin et al., 2018b) respectively; † indicates the baseline result reported in (Qin et al., 2018a,b) and ‡ indicates the baseline result given by our implementation.…”
Section: PR Curves
confidence: 99%
“…DSGAN (Qin et al., 2018a), a GAN-based method, was also used to recognize true positive instances in noisy datasets. To further alleviate the effect of the wrong labeling problem, a soft-label training algorithm (Liu et al., 2017b), reinforcement learning methods (Feng et al., 2018; Qin et al., 2018b) and additional side information (Vashishth et al., 2018; Wang et al., 2018) have been used. Most recently, a few methods have focused on pre-training embeddings for word tokens and relations, including adversarial training (Wu et al., 2017), transfer learning (Liu et al., 2018) and a relation decoder (Su et al., 2018).…”
Section: Related Work
confidence: 99%