Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1046
DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction

Abstract: Distant supervision can effectively label data for relation extraction, but suffers from the noisy labeling problem. Recent works mainly perform soft bag-level noise reduction strategies to find the relatively better samples in a sentence bag, which is suboptimal compared with making a hard decision on false positive samples at the sentence level. In this paper, we introduce an adversarial learning framework, which we name DSGAN, to learn a sentence-level true-positive generator. Inspired by Generative Adversarial…
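The abstract describes an adversarial setup in which a generator learns to pick out true-positive sentences from distantly supervised bags while a discriminator pushes back. The following is a minimal, simplified sketch of that idea, not the authors' implementation: sentence features, variable names, the logistic generator/discriminator, and the REINFORCE-style update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "bag": rows are feature vectors of sentences labeled by distant supervision.
X = rng.normal(size=(8, 4))
w_g = np.zeros(4)   # generator: scores P(true positive | sentence)
w_d = np.zeros(4)   # discriminator: scores P(positive | sentence)
lr = 0.1

for epoch in range(50):
    # Generator proposes a "true positive" subset by sampling per sentence.
    p_keep = sigmoid(X @ w_g)
    chosen = rng.random(len(X)) < p_keep

    # Discriminator step: adversarially treat generator-chosen sentences as
    # NEGATIVE and the rest as positive (simplified DSGAN-style signal).
    y = (~chosen).astype(float)
    d = sigmoid(X @ w_d)
    w_d += lr * (X.T @ (y - d))        # logistic-regression gradient ascent

    # Generator step (REINFORCE): reward is the discriminator's positive
    # score, so the generator favors sentences the discriminator still
    # believes are positive.
    reward = sigmoid(X @ w_d) - 0.5    # baseline-centered reward
    w_g += lr * (X.T @ (reward * (chosen.astype(float) - p_keep)))

print("keep probabilities:", np.round(sigmoid(X @ w_g), 2))
```

After training, sentences with high `sigmoid(X @ w_g)` would be kept as cleaned training data; in the paper this filtering feeds a downstream relation-extraction model, which this toy sketch omits.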


Cited by 152 publications (101 citation statements); references 21 publications.
“…Adversarial training has also been adopted for information extraction (Wu et al., 2017; Hong et al., 2018; Qin et al., 2018; …). These adversarial information extraction methods either generate adversarial instances by adding simple noise perturbation to embeddings (Wu et al., 2017; Hong et al., 2018), or mainly adopt models to denoise data and neglect to discover more training instances from raw data (Qin et al., 2018; …). Compared with these methods, our adversarial method samples adversarial examples from the real-world data rather than generating pseudo noisy perturbations.…”
Section: Related Work
confidence: 99%
“…[30] proposes to model the noise caused by the wrong labelling problem and shows that a dynamic transition matrix can effectively characterize the noise. [31,32] propose to use adversarial learning [33] to solve the wrong labelling problem. Instead, [34,35] adopt reinforcement learning to learn to select high-quality data for training.…”
Section: Recent Years Deep Learning Has Achieved Remarkable Success
confidence: 99%
“…com/tyliupku/soft-label-RE. Table 5: AUC values of previous work and our models, where ATT BL+DSGAN and ATT BL+RL are two models proposed in (Qin et al., 2018a) and (Qin et al., 2018b) respectively; † indicates the baseline result reported in (Qin et al., 2018a,b) and ‡ indicates the baseline result given by our implementation.…”
Section: PR Curves
confidence: 99%