2019
DOI: 10.48550/arxiv.1911.10484
Preprint
Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context

Abstract: Conversations have an intrinsic one-to-many property, which means that multiple responses can be appropriate for the same dialog context. In task-oriented dialogs, this property leads to different valid dialog policies towards task completion. However, none of the existing task-oriented dialog generation approaches takes this property into account. We propose a Multi-Action Data Augmentation (MADA) framework to utilize the one-to-many property to generate diverse appropriate dialog responses. Specifically, we …

Cited by 16 publications (40 citation statements)
References 29 publications
“…Many recent works attempt to use end-to-end neural models, such as sequence-to-sequence and GPT-2, to solve task-oriented dialog problems and achieve remarkable results (Wen et al. 2016b; Budzianowski et al. 2018; Zhang, Ou, and Yu 2019; Peng et al. 2020; Ham et al. 2020a). In our work, we follow the fine-tuning strategy of Peng et al. (2020), where we concatenate the dialog history and annotations, flatten them into a string, and then use a combination of three objectives to fine-tune GPT-2.…”
Section: Multi-task Fine-tuning
Confidence: 99%
“…Wen et al. (2017), Yang et al. (2017), and Ham et al. (2020b) demonstrate that end-to-end systems outperform the traditional pipeline approaches in task-oriented dialog scenarios. Zhang, Ou, and Yu (2019) and Peng et al. (2020) focus on the benchmarks of the MultiWOZ dataset and achieve top performance.…”
Section: Introduction
Confidence: 99%