2017
DOI: 10.48550/arxiv.1701.03185
Preprint
Generating High-Quality and Informative Conversation Responses with Sequence-to-Sequence Models

Cited by 23 publications (20 citation statements)
References 3 publications
“…And whether a model can generate diverse (Xu et al., 2018; Baheti et al., 2018), coherent (Li et al., 2016b; Tian et al., 2017; Bosselut et al., 2018; Adiwardana et al., 2020), informative (Shao et al., 2017; Lewis et al., 2017; Ghazvininejad et al., 2017; Young et al., 2017; Zhao et al., 2019), and knowledge-fused (Hua et al., 2020; Zhao et al., 2020; He et al., 2020) responses has become a metric for evaluating a dialog generation model. However, the research described above deals with text only, and multimodal dialog generation has developed relatively slowly owing to the lack of large-scale datasets.…”
Section: Dialog Generation
confidence: 99%
“…Another problem of current commenting systems arises from the limitation of the Seq2Seq framework (Sutskever et al., 2014), which has been known to suffer from generating responses that are dull and irrelevant to the input articles (Li et al., 2015; Wei et al., 2019; Shao et al., 2017). As shown in Figure 1, the Seq2Seq baseline generates "I love this movie" for the input article, despite the fact that Ode of joy is not a movie but a TV series.…”
Section: Anger
confidence: 99%
“…Once the barycenter p was computed, the result was fed into a beam search (beam size B = 5), whose output, in turn, was then given to the captioner's LSTM, and the process continued until a stop symbol (EOS) was generated. In order to exploit the controllable entropy of the W. barycenter via the entropic regularization parameter ε, we also decode using the randomized beam search of Shao et al. (2017), where instead of maintaining the top k values, we sample D candidates in each beam. The smoothness of the barycenter in semantic clusters and its controllable entropy promote diversity in the resulting captions.…”
Section: Image Captioning
confidence: 99%
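
The last quoted passage describes decoding with the randomized beam search attributed to Shao et al. (2017): rather than deterministically keeping the top-k continuations, each beam samples D candidate next tokens. The sketch below is a minimal illustration of that sampling step only, not the authors' implementation; the `log_probs` interface, the default value of D, and the temperature parameter are assumptions introduced here for the example.

```python
# Minimal sketch of sampling-based ("randomized") beam search.
# Hypothetical model interface: log_probs(prefix) -> log p(next token | prefix).
import math
import random
from typing import Callable, List, Sequence, Tuple

def randomized_beam_search(
    log_probs: Callable[[List[int]], Sequence[float]],
    bos: int,
    eos: int,
    beam_size: int = 5,         # B = 5, as in the quoted passage
    samples_per_beam: int = 5,  # D sampled candidates per beam (assumed value)
    max_len: int = 20,
    temperature: float = 1.0,   # >1 flattens the distribution (assumed knob)
) -> List[Tuple[List[int], float]]:
    beams: List[Tuple[List[int], float]] = [([bos], 0.0)]
    finished: List[Tuple[List[int], float]] = []

    for _ in range(max_len):
        candidates: List[Tuple[List[int], float]] = []
        for prefix, score in beams:
            lp = log_probs(prefix)
            # Turn log-probabilities into a temperature-scaled categorical.
            weights = [math.exp(l / temperature) for l in lp]
            total = sum(weights)
            probs = [w / total for w in weights]
            # Sample D next tokens instead of taking the deterministic top-D.
            sampled = random.choices(range(len(probs)), weights=probs, k=samples_per_beam)
            for tok in set(sampled):  # de-duplicate samples within a beam
                candidates.append((prefix + [tok], score + lp[tok]))
        # Keep the B best sampled continuations; set finished hypotheses aside.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates:
            if prefix[-1] == eos:
                finished.append((prefix, score))
            elif len(beams) < beam_size:
                beams.append((prefix, score))
        if not beams:
            break

    return sorted(finished + beams, key=lambda c: c[1], reverse=True)

# Toy usage with a uniform 4-token vocabulary (purely illustrative).
toy = lambda prefix: [math.log(0.25)] * 4
print(randomized_beam_search(toy, bos=0, eos=3, max_len=5)[0])
```

Sampling per beam, optionally with a temperature above 1, keeps lower-probability continuations in play, which is one simple way to obtain the diversity the passage attributes to this decoding scheme.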