Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1319

Towards Controllable and Personalized Review Generation

Abstract: In this paper, we propose a novel model RevGAN that automatically generates controllable and personalized user reviews based on arbitrarily given sentiment and stylistic information. RevGAN utilizes a combination of three novel components, including self-attentive recursive autoencoders, conditional discriminators, and personalized decoders. We test its performance on several real-world datasets, where our model significantly outperforms state-of-the-art generation models in terms of sentence quality…

Cited by 22 publications (21 citation statements) · References 32 publications

“…One of the most popular models is SeqGAN [37], which leverages a discriminator and a generator playing a minimax game to generate text. Extensions were developed, such as DP-GAN [36], RevGAN [20], RankGAN [21] and LeakGAN [12]. However, these algorithms do not offer a means to control the trade-off between diversity and accuracy.…”
Section: Related Work
confidence: 99%
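The minimax setup described in this excerpt pairs a generator that samples token sequences with a discriminator that scores them, the discriminator's score acting as a reward for the generator. Below is a minimal sketch of that loop; the GRU architecture, the sizes, and the whole-sequence REINFORCE update are illustrative assumptions, not any of the cited implementations (SeqGAN itself uses Monte-Carlo rollouts to obtain per-step rewards).

import torch
import torch.nn as nn

VOCAB, EMB, HID, MAX_LEN = 5000, 64, 128, 20  # assumed sizes, for illustration

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def sample(self, batch):
        # Autoregressively sample tokens, keeping log-probs for REINFORCE
        tok = torch.zeros(batch, 1, dtype=torch.long)  # assume id 0 = <bos>
        h, seq, logps = None, [], []
        for _ in range(MAX_LEN):
            out, h = self.rnn(self.emb(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(out[:, -1]))
            nxt = dist.sample()                        # (batch,)
            logps.append(dist.log_prob(nxt))
            tok = nxt.unsqueeze(1)
            seq.append(tok)
        return torch.cat(seq, 1), torch.stack(logps, 1)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.cls = nn.Linear(HID, 1)

    def forward(self, seq):
        _, h = self.rnn(self.emb(seq))                    # h: (1, batch, HID)
        return torch.sigmoid(self.cls(h[-1])).squeeze(1)  # P(sequence is real)

G, D = Generator(), Discriminator()
g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)

fake, logps = G.sample(8)
reward = D(fake).detach()                  # discriminator score as reward signal
g_loss = -(logps.sum(1) * reward).mean()   # raise probability of rewarded samples
g_opt.zero_grad(); g_loss.backward(); g_opt.step()

In practice the discriminator is trained in alternation on real and generated sequences, which is what makes the game minimax rather than a fixed reward.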
“…While review generation is essentially a subtask of D2T, it is relatively understudied compared to other D2T tasks. Previous models include an encoder-decoder model with attention (Dong et al., 2017), improved by including an objective function for rating accuracy (Sharma et al., 2018; Li and Tuzhilin, 2019), by introducing a hierarchical decoder (Zang and Wan, 2017), by decomposing the decoding stage in a coarse-to-fine manner, and by using additional inputs such as a user-given summary (Ni and McAuley, 2018) or a product description (Li and Tuzhilin, 2019). In this paper, we improve performance by proposing the concept of leveraging references; the extensions proposed in the aforementioned literature are orthogonal and thus applicable to further improve our models.…”
Section: Related Work
confidence: 99%
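The encoder-decoder-with-attention pattern that these review generators build on encodes the conditioning attributes (e.g., user, item, rating) and lets the decoder attend over them at every step. A minimal sketch follows; treating attributes as a short token sequence, the dot-product attention, and all sizes are illustrative assumptions rather than the cited models.

import torch
import torch.nn as nn

VOCAB, EMB, HID = 5000, 64, 128  # assumed sizes, for illustration

class AttnEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.enc = nn.GRU(EMB, HID, batch_first=True)  # encodes attribute tokens
        self.dec = nn.GRUCell(EMB + HID, HID)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, attrs, prev_tokens):
        enc_h, _ = self.enc(self.emb(attrs))           # (B, S, HID)
        h = enc_h[:, -1]                               # init decoder state
        logits = []
        for t in range(prev_tokens.size(1)):
            # Dot-product attention over the encoded attributes
            scores = torch.bmm(enc_h, h.unsqueeze(2)).squeeze(2)    # (B, S)
            ctx = (enc_h * scores.softmax(1).unsqueeze(2)).sum(1)   # (B, HID)
            h = self.dec(torch.cat([self.emb(prev_tokens[:, t]), ctx], 1), h)
            logits.append(self.out(h))
        return torch.stack(logits, 1)                  # (B, T, VOCAB)

model = AttnEncoderDecoder()
attrs = torch.randint(0, VOCAB, (2, 3))   # e.g. [user_id, item_id, rating] tokens
prev = torch.randint(0, VOCAB, (2, 7))    # teacher-forced review prefix
print(model(attrs, prev).shape)           # torch.Size([2, 7, 5000])

The extensions the excerpt lists (rating-accuracy objectives, hierarchical or coarse-to-fine decoders, extra inputs) all slot into this same skeleton.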
“…Rational Response Generation: Existing rational response generation models usually rely on knowledge bases, such as open-domain response generation (Han et al. 2015; Young et al. 2018; Ghazvininejad et al. 2018; Liu et al. 2018; Tuan, Chen, and Lee 2019; Moon et al. 2019), task-oriented response generation (Madotto, Wu, and Fung 2018; Wu, Socher, and Xiong 2019) and question answering (Sun et al. 2018; Banerjee et al. 2019). Zhou et al. (2018b) … (Hu et al. 2017; Li and Tuzhilin 2019), language models (Ghosh et al. 2017) and Seq2Seq models (Xing et al. 2017; Xu et al. 2019). Keskar et al. (2019) trained a Transformer-based conditional language model on a large collection of corpora with control codes that govern style, content, and task-specific behavior.…”
Section: Related Work
confidence: 99%
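The control-code idea mentioned at the end of this excerpt amounts to prepending a special token on which every subsequent prediction is conditioned. Below is a minimal sketch of that conditioning; the reserved token ids, the GRU backbone, and the sizes are assumptions for illustration (Keskar et al.'s CTRL uses a large Transformer, not a GRU).

import torch
import torch.nn as nn

VOCAB, N_CODES, EMB = 5000, 4, 64  # e.g. codes: positive/negative/formal/casual

class ControlledLM(nn.Module):
    def __init__(self):
        super().__init__()
        # Control codes share the embedding table at reserved ids [0, N_CODES)
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, EMB, batch_first=True)
        self.out = nn.Linear(EMB, VOCAB)

    def forward(self, code, tokens):
        # Prepend the control code so it conditions all later positions
        x = torch.cat([code.unsqueeze(1), tokens], dim=1)
        h, _ = self.rnn(self.emb(x))
        return self.out(h)  # next-token logits at every position

lm = ControlledLM()
code = torch.tensor([1])                  # hypothetical "negative sentiment" code
tokens = torch.randint(N_CODES, VOCAB, (1, 10))
logits = lm(code, tokens)
print(logits.shape)                       # torch.Size([1, 11, 5000])

Because the code sits at the start of the sequence, the recurrent (or attention) state carries it forward, which is what lets a single model steer style, content, and task behavior at inference time.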