Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, 2021
DOI: 10.18653/v1/2021.acl-srw.18

Joint Detection and Coreference Resolution of Entities and Events with Document-level Context Aggregation

Abstract: Constructing knowledge graphs from unstructured text is an important task that is relevant to many domains. Most previous work focuses on extracting information from sentences or paragraphs, due to the difficulty of analyzing longer contexts. In this paper we propose a new jointly trained model that can be used for various information extraction tasks at the document level. The tasks performed in this paper are entity and event identification, typing, and coreference resolution. In order to improve entity and …

Cited by 5 publications (4 citation statements)
References 19 publications
“…However, as error propagation problems are present in these works, the accuracy of downstream subtasks will be impacted. Thus, more and more joint models have been proposed to solve this problem [18, 19, 20]. Chen et al. proposed the dynamic multi-pooling CNN model to extract semantic features [9].…”
Section: Related Work
confidence: 99%
“…Coreference resolution is the task of identifying text spans that refer to the same entities and grouping these spans into coreference chains (clusters). The task has an impact on the performance of various natural language applications, including information extraction (Kriman and Heng, 2021), question answering (Bhattacharjee et al, 2020), and text summarization (Li et al, 2021;Steinberger et al, 2007).…”
Section: Introduction
confidence: 99%
“…Early neural methods focus on obtaining trigger representations by various encoders and then manually constructing matching features (Krause et al, 2016;Nguyen et al, 2016), while recent studies integrate event compatibility into judgments using well-designed model structures (Huang et al, 2019;Lai et al, 2021;Lu and Ng, 2021a) or directly incorporate argument information into event modeling (Zeng et al, 2020;Tran et al, 2021), alleviating noise brought by wrongly extracted or empty event slots. Other work (Kriman and Ji, 2021;Xu et al, 2022) learns ECR-aware event representations through contrastive learning or multi-level modeling. However, at least two limitations exist in the above studies.…”
Section: Introduction
confidence: 99%
“…Especially for coreferential event pairs with unbalanced information, the learned embeddings may significantly differ if one event has rich arguments while the other has only a vague trigger. To alleviate this problem, Kenyon-Dean et al (2018) and Kriman and Ji (2021) constrain event modeling by attraction and repulsion losses, making coreferential events have similar representations. However, this approach still performs event modeling and coreference discrimination separately, and Xu et al (2022) find that appropriate tensor matching can achieve similar effects.…”
Section: Introduction
confidence: 99%
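The statement above describes constraining event representations with attraction and repulsion losses so that coreferential events end up with similar embeddings. As a rough illustration of that idea (not the cited papers' actual formulation), the following minimal sketch computes a pairwise contrastive loss: pairs in the same coreference cluster are pulled together, other pairs are pushed apart up to a margin. The function name, margin value, and use of Euclidean distance are assumptions for illustration only.

```python
import numpy as np

def attraction_repulsion_loss(embeddings, labels, margin=1.0):
    """Hypothetical pairwise attraction-repulsion loss.

    embeddings: (n, d) array of event representations.
    labels: length-n cluster ids; equal ids mean coreferential events.
    """
    n = len(embeddings)
    terms = []
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                terms.append(d ** 2)                     # attraction: pull coreferential pair together
            else:
                terms.append(max(0.0, margin - d) ** 2)  # repulsion: push apart until margin is reached
    return float(np.mean(terms))
```

In this toy setup, a pair of identical embeddings with the same label contributes zero attraction loss, and a well-separated pair with different labels contributes zero repulsion loss, so already-consistent clusters incur no penalty.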