Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events 2020
DOI: 10.18653/v1/2020.nuse-1.1

New Insights into Cross-Document Event Coreference: Systematic Comparison and a Simplified Approach

Abstract: Cross-Document Event Coreference (CDEC) is the task of finding coreference relationships between events in separate documents, most commonly assessed using the Event Coreference Bank+ corpus (ECB+). At least two different approaches have been proposed for CDEC on ECB+ that use only event triggers, and at least four have been proposed that use both triggers and entities. Comparing these approaches is complicated by variation in the systems' use of gold vs. computed labels, as well as variation in the document c…

Cited by 5 publications (17 citation statements). References 18 publications.
“…Finally, the contribution of the multi-perspective cosine similarity underscores the importance of cosine similarity as observed by Cremisini and Finlayson (2020). These ablations, including on the importance of document-level information (−CLS), suggest new directions for token and document representations in coreference.…”
Section: Feature Ablation
confidence: 82%
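To make the feature above concrete, here is a minimal sketch of cosine-similarity features between two event mentions. It is not the cited system's code: the vectors are random placeholders standing in for encoder outputs, and the fixed-slice "perspectives" are an assumption used only to illustrate the idea of scoring a pair from several views alongside a document-level similarity.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

rng = np.random.default_rng(0)

# Hypothetical trigger and document vectors (e.g. pooled encoder outputs).
trigger_a, trigger_b = rng.random(768), rng.random(768)
doc_a, doc_b = rng.random(768), rng.random(768)

# One pairwise feature per "perspective": whole-vector cosine plus cosines over
# fixed slices of the vectors (the slicing is an illustrative stand-in for
# learned perspectives), and a document-level similarity.
features = [cosine(trigger_a, trigger_b), cosine(doc_a, doc_b)]
for chunk_a, chunk_b in zip(np.split(trigger_a, 4), np.split(trigger_b, 4)):
    features.append(cosine(chunk_a, chunk_b))

print(np.round(features, 3))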
“…Due to the high-quality document clustering, we only observe a drop of ∼1 CoNLL F1 point when using these predicted clusters, compared to the gold document clusters. However, we note that such a small decrease relies on the quality of the clustering, as shown by the larger gap (3 F1 points) observed by Cremisini and Finlayson (2020) with less accurate clusters.…”
Section: Coreference Resolution
confidence: 85%
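The effect described above depends on how documents are pre-clustered before mention pairs are formed. As a rough, hedged illustration (TF-IDF plus K-means are assumptions here, not necessarily what either cited system uses), predicted document clusters might be produced like this, with coreference candidates then drawn only from within a cluster:

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Earthquake strikes coastal town, dozens injured.",
    "Coastal town hit by a strong earthquake on Tuesday.",
    "Tech company unveils its new smartphone model.",
    "New smartphone announced at annual tech event.",
]

# Vectorize and cluster the documents; mention pairs are later scored only
# within the same predicted cluster, mimicking the gold topic clusters.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, doc in zip(labels, documents):
    print(label, doc)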
“…The existing literature on supervised event coreference resolution primarily focuses on designing pairwise classifiers based on surface linguistic features, such as lexical features comprising lemma and part-of-speech tag similarity of event words (Bejan and Harabagiu, 2010; Lee et al., 2012; Liu et al., 2014; Yang et al., 2015; Cremisini and Finlayson, 2020), argument overlap (McConky et al., 2012; Sangeetha and Arock, 2012; Bejan and Harabagiu, 2014; Yang et al., 2015; Choubey and Huang, 2017), semantic similarity based on lexical resources such as WordNet (Bejan and Harabagiu, 2010; Liu et al., 2014; Yu et al., 2016) and word embeddings (Yang et al., 2015; Choubey and Huang, 2017; Kenyon-Dean et al., 2018; Barhom et al., 2019; Zuo et al., 2019; Pandian et al., 2020; Sahlani et al., 2020), and discourse features such as token and sentence distance (Liu et al., 2014; Cybulska and Vossen, 2015). The resulting classifier is used to cluster event mentions.…”
Section: Related Work
confidence: 99%
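The recipe summarized in that passage (a pairwise classifier over surface features, followed by clustering of coreferent mentions) can be sketched as follows. The feature set and the logistic-regression choice are illustrative assumptions rather than a reproduction of any one cited system, and the toy mentions are invented for the example.

from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class EventMention:
    lemma: str       # lemma of the trigger word
    pos: str         # part-of-speech tag of the trigger
    sent_idx: int    # sentence index within the document
    arguments: set   # surface strings of the mention's arguments

def pair_features(a: EventMention, b: EventMention) -> list:
    # Surface features of the kind listed in the related-work summary.
    arg_union = a.arguments | b.arguments
    overlap = len(a.arguments & b.arguments) / len(arg_union) if arg_union else 0.0
    return [
        float(a.lemma == b.lemma),     # lemma match
        float(a.pos == b.pos),         # POS-tag match
        overlap,                       # argument overlap
        abs(a.sent_idx - b.sent_idx),  # sentence distance (discourse feature)
    ]

# Toy training pairs: (mention_a, mention_b, coreferent?)
m1 = EventMention("attack", "NN", 0, {"rebels", "city"})
m2 = EventMention("attack", "VB", 2, {"rebels", "city"})
m3 = EventMention("meet", "VB", 1, {"leaders"})
X = [pair_features(m1, m2), pair_features(m1, m3)]
y = [1, 0]

# Pairwise scores from the classifier would then be clustered into event chains.
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([pair_features(m2, m3)]))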
“…Cross-document coreference resolution of entities and events (CDCR) is an increasingly important problem, as downstream tasks that benefit from coreference annotations, such as question answering, information extraction, and summarization, begin interpreting multiple documents simultaneously. Yet the number of candidate mentions across documents makes evaluating the full n² pairwise comparisons intractable (Cremisini and Finlayson, 2020). For single-document coreference, the search space is pruned with simple recency-based heuristics, but there is no natural corollary to recency with multiple documents.…” (Footnote 1: Code is available at https://github.com/Helw150/event_entity_coref_ecb_plus)
Section: Introduction
confidence: 99%
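The quadratic blow-up mentioned in that passage, and the saving gained by restricting comparisons to document clusters, can be seen with a quick back-of-the-envelope calculation; the mention and cluster counts below are made-up numbers used only for illustration.

# Candidate pairs among n mentions grow as n * (n - 1) / 2.
def pair_count(n: int) -> int:
    return n * (n - 1) // 2

total_mentions = 10_000                   # hypothetical corpus-wide mention count
print(f"{pair_count(total_mentions):,}")  # 49,995,000 pairwise comparisons

# If those mentions fall into 100 predicted document clusters of 100 mentions
# each, only within-cluster pairs need scoring.
clusters = [100] * 100
print(f"{sum(pair_count(c) for c in clusters):,}")  # 495,000 comparisons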