2022
DOI: 10.48550/arxiv.2205.01909
Preprint

Modeling Task Interactions in Document-Level Joint Entity and Relation Extraction

Abstract: We target document-level relation extraction in an end-to-end setting, where the model must jointly perform mention extraction, coreference resolution (COREF) and relation extraction (RE) at once, and is evaluated in an entity-centric way. In particular, we address the two-way interaction between COREF and RE, which has not been the focus of previous work, and propose to introduce an explicit interaction, namely Graph Compatibility (GC), specifically designed to leverage task characteristics, bridgi…
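The "entity-centric" evaluation mentioned in the abstract can be illustrated with a minimal sketch (this is an assumption about the general setup, not the paper's actual code): predicted mentions are grouped into entity clusters by coreference, relations link clusters, and a predicted relation only counts as correct if both of its entity clusters exactly match gold clusters.

```python
# Hedged sketch of entity-centric relation scoring. All names here are
# illustrative; the paper's own metric and implementation may differ.

def entity_centric_relation_f1(pred, gold):
    """pred/gold: (clusters, relations). clusters is a list of frozensets of
    mention spans; relations is a set of (head_idx, tail_idx, label) tuples
    over cluster indices."""
    def resolve(clusters, relations):
        # Express each relation via the mention clusters it links, so that
        # matching requires the full entity clusters (not just one mention).
        return {(clusters[h], clusters[t], r) for h, t, r in relations}

    p, g = resolve(*pred), resolve(*gold)
    tp = len(p & g)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0


# Toy example: two entities, one relation, predicted perfectly.
gold_clusters = [frozenset({(0, 2), (10, 11)}), frozenset({(5, 6)})]
gold_relations = {(0, 1, "works_for")}
pred_clusters = [frozenset({(0, 2), (10, 11)}), frozenset({(5, 6)})]
pred_relations = {(0, 1, "works_for")}

score = entity_centric_relation_f1(
    (pred_clusters, pred_relations), (gold_clusters, gold_relations)
)
# score is 1.0 here; a coreference error on either cluster would also
# invalidate the relation, which is what makes the metric entity-centric.
```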

Cited by 2 publications (2 citation statements)
References 12 publications
“…Thus, [93] introduce Span-BERT, a BERT model pre-trained directly on spans of text instead of individual tokens (as in BERT). This model has been successfully used as the backbone to achieve state-of-the-art results in numerous information extraction tasks [94][95][96]. We use span-based models in our work described in Chapters 2, 3 and 4 of this thesis.…”
Section: Span-based Information Extraction
confidence: 99%
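The span-based paradigm the citing thesis refers to can be sketched in a few lines (an illustrative assumption about the general approach, not SpanBERT's actual API): instead of labeling individual tokens, candidate spans up to a maximum width are enumerated and then scored by the model.

```python
# Hedged illustration of candidate-span enumeration, the first step in
# span-based information extraction. The function name and signature are
# hypothetical, chosen only for this sketch.

def enumerate_spans(tokens, max_width=3):
    """Return all (start, end) spans (end exclusive) of at most max_width
    tokens; a downstream scorer would then classify each candidate."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_width, len(tokens)) + 1):
            spans.append((start, end))
    return spans


spans = enumerate_spans(["Barack", "Obama", "visited", "Paris"], max_width=2)
# Includes multi-token candidates such as (0, 2) for "Barack Obama", which a
# token-level tagger would have to assemble from per-token labels.
```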
“…This fundamental NLP task can benefit various applications, such as Information Extraction [3,4], Question Answering [5,6], Machine Translation [7,8], and Summarization [9,10], which are of great research value. Coref requires document-level encoding.…”
Section: Introduction
confidence: 99%