Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.761
Knowing False Negatives: An Adversarial Training Method for Distantly Supervised Relation Extraction

Abstract: Distantly supervised relation extraction (RE) automatically aligns unstructured text with relation instances in a knowledge base (KB). Due to the incompleteness of current KBs, sentences implying certain relations may be annotated as N/A instances, which causes the so-called false negative (FN) problem. Current RE methods usually overlook this problem, inducing improper biases in both training and testing procedures. To address this issue, we propose a two-stage approach. First, it finds out possible FN samples…
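The abstract's premise can be made concrete with a small, hypothetical sketch of how distant supervision produces false negatives: sentences are labeled by looking up their entity pair in the KB, so a sentence whose relation is simply missing from the KB falls back to N/A. The KB triples, sentences, and entity pairs below are invented for illustration.

```python
# Hypothetical illustration of distant-supervision labeling and the
# false negative (FN) problem; the KB and sentences are made up.
kb = {("Paris", "France"): "capital_of"}  # an incomplete KB

sentences = [
    ("Paris is the capital of France.", ("Paris", "France")),
    ("Ottawa is the capital of Canada.", ("Ottawa", "Canada")),
]

labels = []
for text, pair in sentences:
    # Distant supervision: assign the KB relation if the entity pair is
    # known, otherwise fall back to N/A -- even when the sentence clearly
    # expresses a relation the KB is missing (a false negative).
    labels.append(kb.get(pair, "N/A"))

print(labels)  # ['capital_of', 'N/A'] -- the second N/A is a false negative
```

The second sentence plainly states a capital_of relation, but because ("Ottawa", "Canada") is absent from the incomplete KB it is annotated N/A, which is exactly the FN bias the paper targets.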

Cited by 10 publications (2 citation statements)
References 26 publications
“…Without resolving this issue, the annotations of the datasets are incomplete. Recent efforts on addressing the false negative problem are from the model perspective (Hao et al., 2021), which aim to denoise the false negative data during training. The challenge for these approaches is that both the development and test sets can be incomplete at the same time.…”
Section: Introduction
confidence: 99%
“…Huang and Du (2019) propose collaborative curriculum learning for denoising. Hao et al. (2021) adopt adversarial training to filter noisy instances in the dataset. Nayak et al. (2021) design a self-ensemble framework to filter noisy instances, despite some information loss.…”
Section: Related Work
confidence: 99%