Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.216
Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction

Abstract: Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce. Existing works either utilize a self-training scheme to generate pseudo labels, which causes the gradual drift problem, or leverage a meta-learning scheme that does not solicit feedback explicitly. To alleviate the selection bias caused by the lack of feedback loops in existing LRE learning paradigms, we developed a Gradient Imitation Reinforcement Learning method to encourage pseudo label…
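The truncated abstract names gradient imitation as the feedback signal. A minimal sketch of that idea follows, assuming a PyTorch classifier and a shared loss function; the names (`flat_grad`, `gradient_imitation_reward`, the batch tuples) are illustrative and not the authors' released code.

```python
import torch
import torch.nn.functional as F

def flat_grad(loss, model):
    # Gradient of `loss` w.r.t. all trainable parameters, flattened into one vector.
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def gradient_imitation_reward(model, criterion, labeled_batch, pseudo_batch):
    # Cosine similarity between the gradient computed on gold-labeled data and
    # the gradient computed on pseudo-labeled data: high similarity means the
    # pseudo labels push the model in the same direction as the gold labels.
    x_l, y_l = labeled_batch
    x_p, y_p = pseudo_batch
    g_labeled = flat_grad(criterion(model(x_l), y_l), model)
    g_pseudo = flat_grad(criterion(model(x_p), y_p), model)
    return F.cosine_similarity(g_labeled, g_pseudo, dim=0)
```

In the gradient-imitation framing this similarity acts as a reward for the pseudo-label generator; the policy update itself is omitted from this sketch.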

Cited by 25 publications (27 citation statements)
References 67 publications
“…Foreign Affairs (Austria) Figure 1: An example of limitations of existing methods without considering temporal information relation extraction [14][15][16][28]. However, existing KGs are generally highly incomplete [44].…”
Section: G S G T (mentioning)
confidence: 99%
“…Inspired by the success of contrastive learning in computer vision tasks (He et al., 2020; Li et al., 2021; Caron et al., 2020), instance-wise contrastive learning in information extraction tasks (Peng et al., 2020; Li et al., 2022a), and large pre-trained language models that show great potential to encode meaningful semantics for various downstream tasks (Devlin et al., 2019; Soares et al., 2019; Hu et al., 2021b), we proposed a hierarchical exemplar contrastive learning schema for unsupervised relation extraction. It has the advantages of supervised learning to capture high-level semantics in the relational features instead of exploiting base-level sentence differences to strengthen discriminative power, and also keeps the advantage of unsupervised learning to handle the cases where the number of relations is unknown in advance.…”
Section: Related Work (mentioning)
confidence: 99%
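For reference, the instance-wise contrastive learning mentioned in the statement above can be illustrated with a minimal InfoNCE-style loss over two views of the same relation instances; this is a generic sketch, not the cited paper's hierarchical exemplar schema, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    # z1, z2: [batch, dim] encodings of two views of the same instances.
    # Each row of z1 is pulled toward its counterpart in z2 (the diagonal)
    # and pushed away from every other row in the batch.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```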
“…To address this problem, (Pham et al., 2020; Wang et al., 2021b; Hu et al., 2021a) propose to utilize the performance of the student model on the held-out labeled data as a Meta Learning objective to update the teacher model or improve the pseudo-label generation process. (Hu et al., 2021b) leverage the cosine distance between gradients computed on labeled data and pseudo-labeled data as feedback to guide the self-training process. (Mehta et al., 2018) propose to inject span constraints from constituency parsing during self-training of semantic role labeling.…”
Section: Related Work (mentioning)
confidence: 99%
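The gradient-distance feedback attributed to Hu et al. (2021b) above can be folded into a self-training step. The sketch below reuses the `gradient_imitation_reward` function from the abstract section; the acceptance rule and threshold are illustrative assumptions, not the paper's published procedure.

```python
def self_training_step(model, optimizer, criterion, labeled_batch, pseudo_batch,
                       reward_threshold=0.0):
    # Step on the pseudo-labeled batch only when its gradient agrees
    # (positive cosine similarity) with the gradient on labeled data,
    # closing the feedback loop that plain self-training lacks.
    reward = gradient_imitation_reward(model, criterion, labeled_batch, pseudo_batch)
    if reward.item() > reward_threshold:
        x_p, y_p = pseudo_batch
        optimizer.zero_grad()
        criterion(model(x_p), y_p).backward()
        optimizer.step()
    return reward
```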