2022
DOI: 10.1016/j.knosys.2022.109146
Heterogeneous affinity graph inference network for document-level relation extraction

Cited by 9 publications (4 citation statements)
References 27 publications

Citation statements:
“…We use recent competitive models as baselines for comparison, including Coref [30], SSAN [31], GAIN [20], ATLOP [32], DocuNet [33], EIDER [34], SAIS [35], HAG [36], and AFLKD [23].…”
Section: Methods
Confidence: 99%
“…HAG [36] proposes a heterogeneous affinity graph inference network, which utilizes coref-aware relation modeling and a noise suppression mechanism to address the long-distance reasoning challenges in document-level RE.…”
Section: Methods
Confidence: 99%
“…Xue et al. [24] generated a latent multi-view graph using a Gaussian graph generator to capture the possible relationships among tokens. Li et al. [7] devised a heterogeneous affinity graph inference network with a noise suppression mechanism to build the long-distance reasoning chain in document-level RE.…”
Section: Related Work
Confidence: 99%
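The affinity graph with noise suppression mentioned in the statement above can be illustrated with a short sketch. This is a minimal, hypothetical example, not the HAG paper's actual model: the cosine affinity measure, the cutoff `tau`, and the single mean-aggregation step are all assumptions made for illustration.

```python
import numpy as np

def affinity_graph(mentions: np.ndarray, tau: float = 0.3) -> np.ndarray:
    """Build a mention-level affinity graph with a simple noise-suppression
    threshold. `mentions` is an (n, d) array of mention embeddings; `tau`
    is a hypothetical cutoff below which edges are treated as noise.
    Illustrative sketch only, not the published HAG architecture."""
    # Cosine affinity between every pair of mention embeddings.
    norms = np.linalg.norm(mentions, axis=1, keepdims=True)
    unit = mentions / np.clip(norms, 1e-8, None)
    affinity = unit @ unit.T
    # Noise suppression: zero out weak edges so reasoning chains are
    # built only over confident connections.
    affinity[affinity < tau] = 0.0
    np.fill_diagonal(affinity, 0.0)
    return affinity

def propagate(mentions: np.ndarray, graph: np.ndarray) -> np.ndarray:
    """One round of neighborhood aggregation over the affinity graph,
    standing in for a graph inference step."""
    deg = np.clip(graph.sum(axis=1, keepdims=True), 1e-8, None)
    return (graph @ mentions) / deg  # weighted mean of neighbors

# Toy usage: five random 16-dimensional mention embeddings.
rng = np.random.default_rng(0)
m = rng.standard_normal((5, 16))
g = affinity_graph(m)
updated = propagate(m, g)
print(g.shape, updated.shape)  # (5, 5) (5, 16)
```

Thresholding before propagation is one plausible reading of "noise suppression": weak, spurious mention pairs no longer contribute to the aggregated representations used for long-distance reasoning.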
“…Conventional works that obtained relational facts within a single sentence (sentence-level) ignored these complex facts across multiple sentences. Over the past few years, research on document-level RE [1][2][3][4][5][6][7] has provided in-depth insights into the RE task, where transformer-based and graph-based methods are widely applied. All these methods suffer from noise in the text, and a necessary long-range semantic dependency among all mentions is a way around this issue.…”
Section: Introduction
Confidence: 99%