Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1462

GECOR: An End-to-End Generative Ellipsis and Co-reference Resolution Model for Task-Oriented Dialogue

Abstract: Ellipsis and co-reference are common and ubiquitous, especially in multi-turn dialogues. In this paper, we treat the resolution of ellipsis and co-reference in dialogue as a problem of generating omitted or referred expressions from the dialogue context. We therefore propose a unified end-to-end Generative Ellipsis and CO-reference Resolution model (GECOR) in the context of dialogue. The model can generate a new pragmatically complete user utterance by alternating the generation and copy mode for each user utte…
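The abstract's "alternating the generation and copy mode" is reminiscent of pointer-generator-style decoding: at each step a soft gate mixes a generation distribution over the vocabulary with a copy distribution over tokens of the dialogue context. A minimal sketch of that mixture, assuming a precomputed gate value `p_gen` and copy attention scores (all names and parameters here are illustrative, not taken from the paper):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def mixed_distribution(vocab_logits, copy_scores, context_token_ids, p_gen):
    """Blend a generation distribution over the vocabulary with a copy
    distribution over the tokens of the dialogue context.

    p_gen in [0, 1] is the gate: 1.0 = pure generation, 0.0 = pure copy.
    Hypothetical interface; the actual GECOR architecture may differ.
    """
    vocab_size = len(vocab_logits)
    gen_dist = softmax(vocab_logits)          # P(w | generate)
    copy_attn = softmax(copy_scores)          # attention over context tokens
    copy_dist = [0.0] * vocab_size
    for tok_id, a in zip(context_token_ids, copy_attn):
        copy_dist[tok_id] += a                # scatter copied mass onto vocab ids
    return [p_gen * g + (1.0 - p_gen) * c
            for g, c in zip(gen_dist, copy_dist)]

# Toy example: 6-word vocabulary; the context contains token ids 2 and 4.
dist = mixed_distribution(
    vocab_logits=[0.1, 0.2, 0.3, 0.0, 0.5, 0.1],
    copy_scores=[0.5, 1.5],
    context_token_ids=[2, 4],
    p_gen=0.3,
)
print(sum(dist))  # the mixture is a valid probability distribution
```

Because both component distributions sum to one, any convex combination of them does too, so the decoder can favor copying (resolving an ellipsis or a referring expression by reusing context tokens) or generating without leaving the probability simplex.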

Cited by 45 publications (58 citation statements) · References 23 publications
“…To the best of our knowledge, there have been no large-scale dialogue datasets with linguistic annotations aimed at ubiquitous discourse phenomena (e.g., ellipsis and coreference) in dialogue. Although some recent works have proposed datasets with utterance completion annotation for ellipsis or coreference in dialogue (Quan et al., 2019), these datasets are small in scale and have simple dialogue goals. No dialogue datasets provide annotations of coreference clusters.…”
Section: Related Work
confidence: 99%
“…CrossWOZ involves 5 domains, and the dialogue goal descriptions for the taxi and metro domains are simpler than those from other domains. Neither MultiWOZ nor CrossWOZ provides linguistic annotations to capture discourse phenomena, which are ubiquitous in multi-turn dialogues and important in dialogue modeling (Quan et al., 2019; Rastogi et al., 2019b; Zhang et al., 2019a). In order to alleviate the aforementioned issues, we propose RiSAWOZ, a large-scale Chinese multi-domain Wizard-of-Oz task-oriented dialogue dataset with rich semantic annotations. Compared with existing datasets (particularly MultiWOZ and CrossWOZ), our contributions can be summarized as follows:…”
Section: Introduction
confidence: 99%
“…From theoretical aspects, various dialogue structures have been studied, including discourse structure (Stent, 2000; Asher et al., 2003), speech acts (Austin, 1962; Searle, 1969) and common grounding (Clark, 1996; Lascarides and Asher, 2009). In dialogue system engineering, various linguistic structures have been considered and applied, including syntactic dependency (Davidson et al., 2019), predicate-argument structure (PAS) (Yoshino et al., 2011), ellipsis (Quan et al., 2019; Hansen and Søgaard, 2020), intent recognition (Silva et al., 2011; Shi et al., 2016), semantic representation/parsing (Mesnil et al., 2013; Gupta et al., 2018) and frame-based dialogue state tracking (Williams et al., 2016; El Asri et al., 2017). However, most prior work focuses on dialogues where information is not grounded in an external, perceptual modality such as vision.…”
Section: Related Work
confidence: 99%
“…CR is beneficial for improving many downstream NLP tasks such as question answering (Dasigi et al., 2019), dialogue systems (Quan et al., 2019), entity linking (Kundu et al.), and opinion mining (Nicolov et al., 2008). In particular, in opinion mining tasks (Liu, 2012; Wang et al., 2016; Zhang et al., 2018; Ma et al., 2020), Nicolov et al. (2008) reported that performance improves by 10% when CR is used.…”
Section: Introduction
confidence: 99%