ICASSP 2019 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8682634
Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A Comparison between Dialog and Translation

Abstract: This paper addresses the question: why do neural dialog systems generate short and meaningless replies? We conjecture that, in a dialog system, an utterance may have multiple equally plausible replies, and that this causes the deficiency of neural networks in the dialog application. We propose a systematic way to mimic the dialog scenario in a machine translation system, and we manage to reproduce the phenomenon of short and less meaningful generated sentences in the translation setting, providing evidence for our conjecture.
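To see why one-to-many replies hurt likelihood-trained models, here is the standard cross-entropy argument in notation of our own choosing (a sketch; the paper's exact formulation may differ):

```latex
% A single utterance x admits k equally plausible replies y^{(1)}, ..., y^{(k)}.
% Token-level maximum-likelihood training averages the loss over all of them:
\mathcal{L}(\theta)
  = -\frac{1}{k}\sum_{i=1}^{k}\log p_\theta\bigl(y^{(i)}\mid x\bigr)
  = -\frac{1}{k}\sum_{i=1}^{k}\sum_{t}\log p_\theta\bigl(y^{(i)}_{t}\mid y^{(i)}_{<t},\,x\bigr).
% The per-step optimum is the empirical mixture of continuations across the
% references, so tokens shared by many replies (typically short, generic
% ones) absorb a disproportionate share of probability mass when decoding.
```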

Cited by 9 publications (7 citation statements)
References 12 publications
“…[Fig. 1: overall architecture of the proposed model. Fig. 2: a sample architecture for sequence2sequence neural transliteration [9].] This method has been widely used in different natural language processing tasks, such as machine translation [3, 1], question answering [11, 15], dialog systems [23], and speech recognition [4]. In machine translation, an input sentence, as a sequence of words, is given to the system, and a sentence, again as a sequence of words, is generated in the output.…”
Section: 1. Transliteration With Deep Sequence2Sequence Model
confidence: 99%
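For readers unfamiliar with the encoder-decoder pattern this quote describes, below is a minimal sequence-to-sequence sketch in PyTorch. The GRU cells, vocabulary sizes, and hidden width are illustrative assumptions, not the architecture of the cited transliteration system:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the source tokens; keep only the final hidden state.
        _, h = self.encoder(self.src_emb(src))
        # Teacher forcing: the decoder reads the gold target prefix,
        # conditioned on the encoder state, and predicts the next token.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), h)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

model = Seq2Seq(src_vocab=8000, tgt_vocab=8000)
src = torch.randint(0, 8000, (2, 10))  # a batch of 2 source "sentences"
tgt = torch.randint(0, 8000, (2, 12))  # shifted gold targets (toy data)
print(model(src, tgt).shape)           # torch.Size([2, 12, 8000])
```

At inference time the decoder would instead run one token at a time, feeding back its own predictions; teacher forcing is shown here because it is the standard training setup.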
“…However, they suffer from the problem of generic utterance generation, e.g., always generating "I don't know" (Serban et al., 2016; Li et al., 2016). One possible explanation (Wei et al., 2019) is the high uncertainty in dialog generation. A plausible response is analogous to a "mode" of a continuous distribution, and the response distribution is thus multimodal.…”
Section: Introduction
confidence: 99%
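The multimodality explanation quoted above can be illustrated with a toy experiment of our own (not the setup of Wei et al., 2019): a count-based bigram "decoder" fitted by maximum likelihood to several distinct references that share a generic prefix.

```python
from collections import Counter, defaultdict

# One prompt, several equally plausible references (toy data of our own).
replies = [
    ["i", "don't", "know", "</s>"],
    ["i", "don't", "mind", "</s>"],
    ["i", "like", "jazz", "</s>"],
    ["maybe", "</s>"],
]

# MLE for a bigram "decoder": p(next | prev) = count(prev, next) / count(prev).
bigrams = defaultdict(Counter)
for r in replies:
    for prev, nxt in zip(["<s>"] + r, r):
        bigrams[prev][nxt] += 1

# Greedy decoding follows the highest-count continuation at each step.
tok, out = "<s>", []
while tok != "</s>" and len(out) < 10:
    tok = bigrams[tok].most_common(1)[0][0]
    if tok != "</s>":
        out.append(tok)

# The generic prefix "i don't" shared by two references outscores every
# individual full reply (the know/mind tie is broken by insertion order).
print(" ".join(out))  # -> i don't know
```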
“…A less well-known issue is the template-like generation of neural NLG systems (Wei et al., 2019). Figure 1 highlights this issue: neural NLG systems (TGen and Slug2Slug) are far less diverse than the training data (the E2E dataset) in their usage of surface forms that express an attribute.…”
Section: Introduction
confidence: 99%
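The diversity gap this quote describes is often quantified with a distinct-n style ratio (unique n-grams over total n-grams in the system outputs); a small illustrative helper, not the cited papers' evaluation code, might look like:

```python
def distinct_n(sentences, n=2):
    """Ratio of unique n-grams to total n-grams across all sentences."""
    total, unique = 0, set()
    for s in sentences:
        toks = s.split()
        grams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
        total += len(grams)
        unique.update(grams)
    return len(unique) / total if total else 0.0

outputs = ["there is a cheap pub near the river",
           "there is a cheap pub near the park"]
print(distinct_n(outputs, n=2))  # ~0.57; lower values mean template-like text
```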