Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.356
Constrained Multi-Task Learning for Event Coreference Resolution

Abstract: We propose a neural event coreference model in which event coreference is jointly trained with five tasks: trigger detection, entity coreference, anaphoricity determination, realis detection, and argument extraction. To guide the learning of this complex model, we incorporate cross-task consistency constraints into the learning process as soft constraints by designing penalty functions. In addition, we propose the novel idea of viewing entity coreference and event coreference as a single coreference task, whi…
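To make the abstract's idea of soft cross-task constraints concrete, here is a minimal sketch of a multi-task objective with one consistency penalty. It is an illustration under assumed names and shapes, not Lu and Ng's actual formulation: `multitask_loss`, `trigger_prob`, `coref_prob`, and the specific penalty are all hypothetical.

```python
import torch

def multitask_loss(task_losses, trigger_prob, coref_prob, penalty_weight=1.0):
    """Hypothetical multi-task objective with a soft cross-task
    consistency penalty (a sketch, not the paper's exact formulation).

    task_losses: list of scalar losses, one per task (e.g. trigger
        detection, entity coreference, anaphoricity determination,
        realis detection, argument extraction, event coreference).
    trigger_prob: (N,) probability that each candidate span is a trigger.
    coref_prob: (N,) probability that each span corefers with an antecedent.
    """
    base = sum(task_losses)
    # Soft constraint: a span should not be judged coreferent more
    # confidently than it is judged to be an event trigger.
    # relu(coref_prob - trigger_prob) is zero whenever the constraint holds,
    # so violations add a differentiable penalty rather than a hard rule.
    violation = torch.relu(coref_prob - trigger_prob).mean()
    return base + penalty_weight * violation
```

The design point is that violations only nudge the gradient: unlike a hard constraint, a confident prediction can still override the penalty when the evidence supports it.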


Cited by 16 publications (9 citation statements); references 19 publications.
“…In light of the above discussion, we present an empirical analysis of our state-of-the-art span-based event coreference resolver (Lu and Ng, 2021) with the goal of gaining insights into its behavior. We believe that our analysis will not only provide the general NLP audience with a better understanding of the strengths and weaknesses of span-based event coreference models, but also provide coreference researchers with directions for future work.…”
Section: Introduction
confidence: 99%
“…We compare our proposed CorefPrompt with the following strong baselines under the same evaluation settings: (i) the joint model Lu&Ng2021 (Lu and Ng, 2021b), which jointly models six related event and entity tasks, and (ii) the pairwise model Xu2022 (Xu et al., 2022), which introduces a document-level event encoding and an event topic model. In addition, we build two pairwise baselines, BERT and RoBERTa, that use the popular BERT/RoBERTa models as encoders.…”
Section: Results
confidence: 99%
“…However, with the multi-output model, we can also identify the next group the user will be with (58% accuracy) and the next activity the user will perform (99% accuracy). Further optimization via advanced LSTM cells that contain task-specific parameters for task-specific learning can improve the model's performance [40].…”
Section: Model
confidence: 99%
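The statement above describes a multi-output model with shared recurrent encoding and task-specific learning. Below is a minimal sketch of that pattern: one shared LSTM encoder with separate prediction heads per task. All class names, dimensions, and head choices are assumptions for illustration, not details of the cited work.

```python
import torch
import torch.nn as nn

class MultiOutputLSTM(nn.Module):
    """Illustrative multi-output model: a shared LSTM encoder feeding
    two task-specific heads (next group, next activity). Names and
    sizes are hypothetical, not taken from the cited paper."""

    def __init__(self, input_dim, hidden_dim, n_groups, n_activities):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.group_head = nn.Linear(hidden_dim, n_groups)          # task-specific
        self.activity_head = nn.Linear(hidden_dim, n_activities)   # task-specific

    def forward(self, x):
        # x: (batch, seq_len, input_dim); h: (1, batch, hidden_dim)
        _, (h, _) = self.encoder(x)
        h = h.squeeze(0)
        return self.group_head(h), self.activity_head(h)

# Usage example with dummy data
model = MultiOutputLSTM(input_dim=8, hidden_dim=32, n_groups=5, n_activities=10)
group_logits, activity_logits = model(torch.randn(4, 20, 8))
```

The shared encoder lets both tasks benefit from a common sequence representation, while the separate heads hold the task-specific parameters the quoted statement alludes to.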