Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1375

Self-Supervised Dialogue Learning

Abstract: The sequential order of utterances is often meaningful in coherent dialogues, and changing that order can lead to low-quality, incoherent conversations. We regard order information as a crucial supervision signal for dialogue learning, one that has nevertheless been neglected by many previous dialogue systems. In this paper, we therefore introduce a self-supervised learning task, inconsistent order detection, to explicitly capture the flow of conversation in dialogues. Given a sampled utterance…
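The pretext task the abstract describes is concrete enough to illustrate in code. Below is a minimal sketch, assuming a triple-based formulation with a GRU utterance encoder; the names OrderDetector and sample_triple, and every hyperparameter, are illustrative assumptions, not the paper's actual SSN implementation.

```python
# Sketch of the inconsistent-order-detection pretext task: draw a triple of
# utterances from a dialogue, permute it half the time, and train a binary
# classifier to predict whether the original order is intact. (Illustrative
# only; the encoder and sampling scheme are assumptions, not the paper's.)
import random
import torch
import torch.nn as nn

class OrderDetector(nn.Module):
    """Binary classifier: is a sampled utterance triple in its original order?"""
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.utt_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        self.clf = nn.Linear(3 * hidden, 1)  # concatenated triple -> one logit

    def encode(self, utt: torch.Tensor) -> torch.Tensor:
        # utt: (batch, seq_len) token ids -> (batch, hidden) utterance vector
        _, h = self.utt_enc(self.embed(utt))
        return h.squeeze(0)

    def forward(self, u1, u2, u3) -> torch.Tensor:
        z = torch.cat([self.encode(u) for u in (u1, u2, u3)], dim=-1)
        return self.clf(z).squeeze(-1)  # logit that the order is consistent

def sample_triple(dialogue):
    """Draw 3 utterances in dialogue order; permute them half the time."""
    idx = sorted(random.sample(range(len(dialogue)), 3))
    triple = [dialogue[i] for i in idx]
    if random.random() < 0.5:
        shuffled = triple[:]
        while shuffled == triple:  # make sure the order actually changes
            random.shuffle(shuffled)
        return shuffled, 0.0       # label: inconsistent order
    return triple, 1.0             # label: consistent order
```

Training such a detector with a binary cross-entropy loss (e.g., nn.BCEWithLogitsLoss) against the sampled labels yields an order-aware signal that can then supervise a dialogue generator, as the abstract suggests.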

Cited by 61 publications (39 citation statements: 0 supporting, 39 mentioning, 0 contrasting)
References 36 publications
“…As a result, our model can provide high-quality responses at a low cost. Before us, there have been a few studies on learning a primary task with auxiliary ones (Rei and Yannakoudakis, 2017; Yu and Jiang, 2016; Ding et al., 2017; Trinh et al., 2018; Mehri et al., 2019; Wu et al., 2019). The work is unique in that, through extensive empirical studies, we verified that a simple structure learned with auxiliary tasks can work as well as deep architectures in dialogue generation.…”
Section: Related Work (mentioning)
confidence: 66%
“…(2) VHRED: an extension of HRED that factorizes response generation with latent variables (Serban et al., 2017); (3) HRAN: hierarchical encoder-decoder equipped with a hierarchical attention mechanism (Xing et al., 2018); (4) ReCoSa: a hierarchical transformer-based model that exhibits state-of-the-art performance on benchmarks; and (5) SSN: a very recent study on enhancing dialogue generation learning with self-supervision signals extracted from utterance order (Wu et al., 2019).…”
Section: Baselines (mentioning)
confidence: 99%
“…In contrast to them, we take a cross-domain QA framework that trains the QA model with source-domain data (virtual-world data) and generates answers with target-domain data (real-world data). Another setting of natural answer generation is multi-turn QA (i.e., dialogue) [60], [61], which generates answer responses based on a given context and a history including past questions and answers. Our story-based QA can be viewed as a special case of single-turn dialogue, which does not use any past questions and answers.…”
Section: Natural Answer Generation (mentioning)
confidence: 99%
“…Shi et al. (2019) used a variational RNN to extract latent dialogue structure and applied it to dialogue policy learning. Wu et al. (2019b) introduced a self-supervised learning task, inconsistent order detection, to explicitly capture the flow of conversation in dialogues. Jin et al. (2018) used unlabeled data to train probabilistic distributions over the vocabulary space as dialogue states for neural dialogue generation.…”
Section: Related Work (mentioning)
confidence: 99%