2022
DOI: 10.48550/arxiv.2205.09837
Preprint

Summarization as Indirect Supervision for Relation Extraction

Abstract: Relation extraction (RE) models are challenged by their reliance on training data with expensive annotations. Since summarization aims to acquire concise expressions of synoptical information from a longer context, it naturally aligns with the objective of RE, i.e., extracting a kind of synoptical information that describes the relation between entity mentions. We present SURE, which converts RE into a summarization formulation. SURE leads to more precise and resource-efficient RE …
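To make the reformulation concrete, below is a minimal sketch of how RE-as-summarization inference could look: each candidate relation is verbalized as a short template "summary", and a pretrained seq2seq summarizer scores each template against the entity-marked input, with the lowest-loss template taken as the prediction. The checkpoint, templates, and scoring scheme are illustrative assumptions, not SURE's exact configuration.

```python
# Sketch: scoring relation verbalization templates with a pretrained
# seq2seq summarizer (illustrative assumptions; not SURE's exact setup).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-large-cnn"  # assumed summarizer checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

sentence = "Steve Jobs co-founded Apple in 1976."
head, tail = "Steve Jobs", "Apple"

# Hypothetical templates verbalizing each candidate relation label.
templates = {
    "org:founded_by": f"{tail} was founded by {head}.",
    "per:employee_of": f"{head} is an employee of {tail}.",
    "no_relation": f"{head} and {tail} are not related.",
}

def template_loss(source: str, summary: str) -> float:
    """Negative log-likelihood of generating `summary` from `source`."""
    enc = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(text_target=summary, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    return out.loss.item()

scores = {rel: template_loss(sentence, tpl) for rel, tpl in templates.items()}
print(min(scores, key=scores.get))  # relation whose verbalization scores best
```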

Cited by 3 publications (6 citation statements). References 31 publications.

“…OpenIE Datasets: Given how data-hungry deep learning models are and how costly it is to manually label OpenIE datasets, most OpenIE training sets are weakly labeled using high-confidence extractions from prior OpenIE models to obtain "silver-standard" labels. For example, the CopyAttention (Cui et al., 2018), SpanOIE (Zhan and Zhao, 2020), and OIE4 (Kolluru et al., 2020b) … (Lu et al., 2022).…”
Section: Related Work
confidence: 99%
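The weak-labeling recipe this statement describes, keeping only a prior model's high-confidence extractions as training labels, is easy to sketch. The extractor interface and threshold below are hypothetical, chosen only to illustrate the idea.

```python
# Sketch: building "silver-standard" OpenIE training data by keeping only a
# prior extractor's high-confidence outputs (extractor API is hypothetical).
from typing import Callable, Iterable

CONF_THRESHOLD = 0.9  # assumed cutoff; tuned per extractor in practice

def silver_label(
    sentences: Iterable[str],
    extractor: Callable[[str], list[tuple[tuple[str, str, str], float]]],
) -> list[dict]:
    """Keep only (subject, relation, object) triples the extractor is sure of."""
    silver = []
    for sent in sentences:
        for triple, conf in extractor(sent):
            if conf >= CONF_THRESHOLD:
                silver.append({"sentence": sent, "triple": triple, "conf": conf})
    return silver
```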
“…For back-translation we use Facebook-FAIR's WMT'19 German-English and English-German models (Ng et al., 2019) and retain only those back-translated sentences whose entailment confidence is above 80%. For relation extraction, we use a pretrained SuRE model, a state-of-the-art relation extraction model (Lu et al., 2022), without any additional fine-tuning, and keep all relations whose confidence is above 80%. These confidence thresholds are hyperparameters that may be adjusted.…”
Section: Datasets and Metrics
confidence: 99%
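As a rough illustration of the filtering step this statement describes, the sketch below back-translates through the FAIR WMT'19 checkpoints and keeps only paraphrases an NLI model rates as entailed with probability at least 0.8. The checkpoints are real Hugging Face models, but the wiring is an assumption, not the cited paper's code.

```python
# Sketch: back-translation (EN -> DE -> EN) plus NLI filtering at the 80%
# entailment threshold quoted above; the wiring is an assumption.
import torch
from transformers import (AutoModelForSeq2SeqLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

def load_mt(name):
    return (AutoTokenizer.from_pretrained(name),
            AutoModelForSeq2SeqLM.from_pretrained(name).eval())

en_de_tok, en_de = load_mt("facebook/wmt19-en-de")  # FAIR WMT'19 checkpoints
de_en_tok, de_en = load_mt("facebook/wmt19-de-en")

nli_tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli").eval()

def translate(text, tok, model):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=128)
    return tok.decode(out[0], skip_special_tokens=True)

def entailment_prob(premise, hypothesis):
    enc = nli_tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**enc).logits.softmax(dim=-1)[0]
    return probs[nli.config.label2id["ENTAILMENT"]].item()

sentence = "Steve Jobs co-founded Apple in 1976."
paraphrase = translate(translate(sentence, en_de_tok, en_de), de_en_tok, de_en)
# Keep the paraphrase only if it is entailed with probability >= 0.8.
if entailment_prob(sentence, paraphrase) >= 0.8:
    print("kept:", paraphrase)
```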
“…Zero-shot RE methods apply indirect supervision and convert RE tasks to other NLP tasks (Levy et al. 2017; Obamuyide and Vlachos 2018; Sainz et al. 2021; Lu et al. 2022). For the zero-shot setting, human annotators do not label any samples but are usually asked to generate relation paraphrase templates.…”
Section: Zero-shot Relation Extraction
confidence: 99%
“…Recently, indirectly supervised methods have taken advantage of pretrained models for other NLP tasks to solve RE tasks in the zero-shot setting. For example, RE tasks have been reformulated as question answering problems (Levy et al. 2017), as natural language inference (NLI) tasks (Obamuyide and Vlachos 2018; Sainz et al. 2021), and as text summarization tasks (Lu et al. 2022). Performance in the zero-shot setting, however, still shows a significant gap and relies heavily on the quality and diversity of relation paraphrase templates.…”
Section: Introduction
confidence: 99%
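To ground the NLI reformulation this statement mentions: each relation is verbalized as a hypothesis template, and an off-the-shelf NLI model scores how strongly the input sentence entails each hypothesis, with the best-entailed template taken as the predicted relation. The checkpoint and templates below are illustrative assumptions, not any cited paper's exact setup.

```python
# Sketch: zero-shot RE as NLI; the relation with the most strongly
# entailed paraphrase template wins (templates are hypothetical).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli").eval()

sentence = "Steve Jobs co-founded Apple in 1976."
head, tail = "Steve Jobs", "Apple"

# Hypothetical relation paraphrase templates.
hypotheses = {
    "org:founded_by": f"{tail} was founded by {head}.",
    "per:employee_of": f"{head} works for {tail}.",
}

def entail_score(premise: str, hypothesis: str) -> float:
    enc = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**enc).logits.softmax(dim=-1)[0]
    return probs[model.config.label2id["ENTAILMENT"]].item()

scores = {rel: entail_score(sentence, hyp) for rel, hyp in hypotheses.items()}
print(max(scores, key=scores.get))  # predicted relation
```

As the quoted passage notes, the quality and diversity of these templates largely determine zero-shot accuracy: a single awkward paraphrase per relation can make the NLI model systematically prefer the wrong label.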