Proceedings of the 55th Annual Meeting of the Association For Computational Linguistics (Volume 1: Long Papers) 2017
DOI: 10.18653/v1/p17-1061

Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders

Abstract: While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates …
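
The abstract's mechanism, sampling a latent variable to represent conversational intent, rests on two standard CVAE operations: the reparameterization trick for drawing the latent sample, and a KL term that pulls the recognition distribution toward the prior. The following is a minimal stdlib sketch of those two operations under the usual diagonal-Gaussian assumption; the function names are ours, and this is not the authors' implementation:

```python
import math
import random

def reparameterize(mu, logvar, rng=None):
    """Draw z = mu + sigma * eps with eps ~ N(0, 1).

    Sampling this way keeps z differentiable with respect to mu and
    logvar, which is what lets a CVAE be trained by backpropagation.
    """
    rng = rng or random.Random(0)
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between two diagonal Gaussians.

    In a CVAE this regularizes the recognition network q(z | x, c)
    toward the prior network p(z | c), summed over latent dimensions.
    """
    kl = 0.0
    for mq, lq, mp, lp in zip(mu_q, logvar_q, mu_p, logvar_p):
        kl += 0.5 * (lp - lq + (math.exp(lq) + (mq - mp) ** 2) / math.exp(lp) - 1.0)
    return kl
```

When q and p coincide the KL term is zero, and it grows as the recognition distribution drifts from the prior; the training objective trades this term off against reconstruction of the response.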

Cited by 614 publications (606 citation statements) · References 35 publications
“…build a VAE for text generation and design a cost function that encourages interpretability of the latent variables. Zhang et al. (2016), Serban et al. (2016), and Zhao et al. (2017) apply VAE to sequence-to-sequence problems, improving over deterministic alternatives. Chen et al. (2016) propose a hybrid model combining autoregressive convolutional layers with the VAE.…”
Section: Related Work (mentioning, confidence: 99%)
“…To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generating diversified inferences (Bowman et al., 2015; Zhao et al., 2017).…”
Section: Corresponding Author (mentioning, confidence: 99%)
“…In addition to image generation, CVAE has been successfully applied to some NLP tasks. For example, Zhao et al. (2017) apply CVAE to dialog generation, while Guu et al. (2018) use CVAE for sentence generation.…”
Section: Conditional Variational Autoencoder (mentioning, confidence: 99%)