Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue 2021
DOI: 10.18653/v1/2021.codi-sharedtask.6
Adapted End-to-End Coreference Resolution System for Anaphoric Identities in Dialogues

Abstract: We present an effective system adapted from the end-to-end neural coreference resolution model, targeting the task of anaphora resolution in dialogues. Three aspects are specifically addressed in our approach: support for singletons, encoding of speakers and turns throughout dialogue interactions, and knowledge transfer utilizing existing resources. Despite the simplicity of our adaptation strategies, they are shown to bring significant impact to the final performance, with up to 27 F1 improvement…

Cited by 3 publications (5 citation statements)
References 16 publications
“…Model For all experiments, we use the higher-order inference (HOI) coreference resolution model (Xu and Choi, 2020), modified slightly to predict singleton clusters (Xu and Choi, 2021). Given a document, HOI encodes texts with an encoder and enumerates all possible spans to detect mentions.…”
Section: Methods
confidence: 99%
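The span-enumeration step described in the statement above can be sketched as follows. This is a simplified illustration, not the HOI implementation: the function name `enumerate_spans` and the `max_width` parameter are assumptions, and the real model additionally prunes and scores spans with learned networks.

```python
def enumerate_spans(tokens, max_width=10):
    """Enumerate every candidate span up to max_width tokens, the way a
    span-based coreference model considers all possible spans before
    scoring them as mentions (illustrative sketch only)."""
    spans = []
    n = len(tokens)
    for start in range(n):
        for end in range(start, min(start + max_width, n)):
            spans.append((start, end))  # inclusive token indices
    return spans
```

For a four-token input with `max_width=2`, this yields the four single-token spans plus the three adjacent two-token spans, seven candidates in total.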
“…Coreference resolution in dialogue has recently re-emerged as an area of research, with multiple datasets created and annotated for coreference resolution (Li et al., 2016; Khosla et al., 2021; more in Table 1) and the development of dialogue-specific models (Xu and Choi, 2021; Kobayashi et al., 2021; Kim et al., 2021). The datasets can be broadly categorized into transcripts of spoken conversations (e.g.…”
Section: Dialogue Coreference Resolution
confidence: 99%
“…pronouns are not considered in DocRED). Second, we support prediction of the singleton entity (an entity with only one mention) by optimizing mention scores, as suggested by Xu and Choi (2021). Full model details are described in Appendix A.1.…”
Section: Baseline
confidence: 99%
“…pronouns are not annotated. In addition, we support predicting the singleton entity (an entity with only one mention) in the same way as Xu and Choi (2021), by keeping all mention candidates whose mention scores are > 0, regardless of whether they co-refer with other mentions. A binary cross-entropy loss on mention scores is thereby added to the training objective.…”
Section: Appendix A
confidence: 99%
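The singleton-support recipe quoted above, a threshold of 0 on mention scores plus a binary cross-entropy term in the loss, can be sketched as below. The function names and the plain-Python formulation are illustrative assumptions, not the cited models' actual code:

```python
import math

def bce_mention_loss(scores, labels):
    """Binary cross-entropy over per-span mention scores (logits), the
    extra term added to the training loss so the model learns an
    explicit mention-vs-non-mention decision (illustrative sketch)."""
    eps = 1e-12
    total = 0.0
    for s, y in zip(scores, labels):
        p = 1.0 / (1.0 + math.exp(-s))  # sigmoid of the logit
        total += -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return total / len(scores)

def keep_singletons(candidates, scores, clustered):
    """Keep any candidate span whose mention score is > 0 and that was
    not linked into a multi-mention cluster; each such span is emitted
    as a singleton entity."""
    return [c for c, s in zip(candidates, scores)
            if s > 0 and c not in clustered]
```

For example, with scores [1.2, -0.5, 0.3] over three candidate spans and the first span already placed in a cluster, only the third span survives as a singleton: the first is already clustered and the second falls below the 0 threshold.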
“…Designed for identity anaphora resolution, these models were also adapted for bridging and discourse deixis resolution. Examples of span-based models submitted for CCST 2021 include the systems of Kobayashi et al. (2021), Renner et al. (2021), and Xu and Choi (2021). Other participants presented different approaches.…”
Section: Introduction
confidence: 99%