Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.426
Document-level Entity-based Extraction as Template Generation

Abstract: Document-level entity-based extraction (EE), aiming at extracting entity-centric information such as entity roles and entity relations, is key to automatic knowledge acquisition from text corpora for various domains. Most document-level EE systems build extractive models, which struggle to model long-term dependencies among entities at the document level. To address this issue, we propose a generative framework for two document-level EE tasks: role-filler entity extraction (REE) and relation extraction (RE). W…
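The template-generation framing described in the abstract can be illustrated with a minimal sketch. The template wording and role names below are illustrative placeholders, not the paper's actual templates: a role-filler extraction instance is converted into a filled natural-language template that a seq2seq model learns to generate.

```python
# Minimal sketch of casting role-filler entity extraction as template
# generation. The template text and role names are hypothetical examples,
# not the actual templates used by the cited system.

def build_target(role_fillers: dict) -> str:
    """Linearize extracted role fillers into a natural-language template."""
    template = "<PERPETRATOR> attacked <TARGET> using <WEAPON>."
    for role, filler in role_fillers.items():
        # Unfilled roles keep their placeholder so decoding stays unambiguous.
        template = template.replace(f"<{role}>", filler or f"<{role}>")
    return template

target = build_target({
    "PERPETRATOR": "the rebels",
    "TARGET": "a power station",
    "WEAPON": "explosives",
})
print(target)  # the rebels attacked a power station using explosives.
```

Training on such targets lets a single generative model handle multiple roles at once, while the fixed template structure keeps the output machine-parsable.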

Cited by 16 publications (18 citation statements)
References 8 publications (15 reference statements)
“…Generation-based structured prediction. Several works have demonstrated the great success of generation-based models on monolingual structured prediction tasks, including named entity recognition (Yan et al., 2021), relation extraction (Huang et al., 2021b; Paolini et al., 2021), and event extraction (Du et al., 2021; Hsu et al., 2021; Lu et al., 2021). Yet, as mentioned in Section 1, the generation targets they design are language-dependent.…”
Section: Related Work (mentioning, confidence: 99%)
“…In this setting, the model is trained on the examples in the source languages and directly tested on the instances in the target languages. Recently, generation-based models have shown strong performances on monolingual structured prediction tasks (Yan et al., 2021; Huang et al., 2021b; Paolini et al., 2021), including EAE (Hsu et al., 2021). These works fine-tune pre-trained generative language models to generate outputs following designed templates such that the final predictions can be easily decoded from the outputs.…”
Section: Introduction (mentioning, confidence: 99%)
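The decoding step this statement refers to — recovering structured predictions from generated text — can be sketched as pattern matching against the known template. The template and role names here are again hypothetical, but real systems apply the same principle with their own template inventories.

```python
import re

# Hypothetical template with named slots; the generated output is matched
# against it to recover role fillers.
TEMPLATE_RE = re.compile(
    r"(?P<PERPETRATOR>.+) attacked (?P<TARGET>.+) using (?P<WEAPON>.+)\."
)

def decode(generated: str) -> dict:
    """Map a generated template string back to a role -> filler dictionary."""
    m = TEMPLATE_RE.match(generated)
    return m.groupdict() if m else {}

print(decode("the rebels attacked a power station using explosives."))
# {'PERPETRATOR': 'the rebels', 'TARGET': 'a power station', 'WEAPON': 'explosives'}
```

Because the template is fixed, decoding needs no alignment model: a single regular expression with named groups suffices, and a non-matching generation simply yields no predictions.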
“…We also develop a new method for text-to-… The approach to IE based on seq2seq has already been proposed. Methods for conducting individual tasks of relation extraction (Zeng et al., 2018; Nayak and Ng, 2020; Huang et al., 2021), named entity recognition (Chen and Moschitti, 2018; Yan et al., 2021), event extraction (Lu et al., 2021), and role-filler entity extraction (Du et al., 2021; Huang et al., 2021) have been developed. Methods for jointly performing multiple tasks of named entity recognition, relation extraction, and event extraction have also been devised (Paolini et al., 2021).…”
Section: Introduction (mentioning, confidence: 99%)
“…One advantage is that one can employ a single model to extract multiple types of information. Results show that this approach works better than, or equally well as, the traditional approach of language understanding, in RE (Zeng et al., 2018; Nayak and Ng, 2020), NER (Chen and Moschitti, 2018; Yan et al., 2021), EE (Lu et al., 2021), and REE (Du et al., 2021; Huang et al., 2021). Methods that jointly perform multiple tasks including NER, RE, and EE have also been devised (Paolini et al., 2021).…”
Section: Introduction (mentioning, confidence: 99%)
“…Paolini et al. (2021) use natural language to encode sentence-level relations, but mapping the output text to the input arguments is not trivial and requires an alignment algorithm. Huang et al. (2021) formulate relation extraction as a template generation problem, but their approach requires a complex cross-attention guided copy mechanism. We explore sentence- as well as cross-sentence relations and encode relations in a structured and human-readable form in which the relation arguments can be easily mapped to the reference entities in the input.…”
Section: Introduction (mentioning, confidence: 99%)
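The structured, human-readable encoding described in this last statement can be sketched as linearized triples whose arguments are checked against the entities actually present in the input. The bracketed `(head; relation; tail)` format and the example data are illustrative choices, not necessarily those of the cited work.

```python
def parse_triples(output: str, entities: set) -> list:
    """Parse '(head; relation; tail)' triples from a generated string,
    keeping only triples whose arguments map to known input entities."""
    triples = []
    for chunk in output.split(")"):
        chunk = chunk.strip().lstrip("(").strip()
        if not chunk:
            continue
        parts = [p.strip() for p in chunk.split(";")]
        if len(parts) == 3 and parts[0] in entities and parts[2] in entities:
            triples.append(tuple(parts))
    return triples

out = "(Marie Curie; born in; Warsaw) (Marie Curie; field; physics)"
ents = {"Marie Curie", "Warsaw"}
print(parse_triples(out, ents))  # [('Marie Curie', 'born in', 'Warsaw')]
```

Constraining arguments to reference entities sidesteps the alignment problem the statement attributes to free-form natural-language outputs: any triple whose arguments cannot be grounded in the input is simply discarded.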