Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) 2018
DOI: 10.18653/v1/p18-2112

Personalized Review Generation By Expanding Phrases and Attending on Aspect-Aware Representations

Abstract: In this paper, we focus on the problem of building assistive systems that can help users to write reviews. We cast this problem using an encoder-decoder framework that generates personalized reviews by expanding short phrases (e.g. review summaries, product titles) provided as input to the system. We incorporate aspect-level information via an aspect encoder that learns 'aspect-aware' user and item representations. An attention fusion layer is applied to control generation by attending on the outputs of multip…
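The abstract outlines a multi-encoder architecture: a sequence encoder expands the input phrases, an aspect encoder yields aspect-aware user and item representations, and an attention fusion layer lets the decoder attend over the outputs of all encoders. Below is a minimal PyTorch sketch of such a fusion step; the module name, dimensions, and concatenation-based fusion are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of attention fusion over multiple encoder outputs (hypothetical names).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size * 2, 1)

    def attend(self, dec_state, enc_outputs):
        # dec_state: (batch, hidden); enc_outputs: (batch, seq, hidden)
        expanded = dec_state.unsqueeze(1).expand(-1, enc_outputs.size(1), -1)
        scores = self.score(torch.cat([expanded, enc_outputs], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)            # (batch, seq)
        return torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)

    def forward(self, dec_state, phrase_enc, aspect_user, aspect_item):
        # One context vector per source, then simple concatenation as the fusion.
        contexts = [self.attend(dec_state, src)
                    for src in (phrase_enc, aspect_user, aspect_item)]
        return torch.cat([dec_state] + contexts, dim=-1)   # fed to the decoder's output layer

# Example shapes: dec_state (2, 64) with three (2, seq, 64) sources -> fused (2, 256).
```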

Cited by 64 publications (64 citation statements)
References 19 publications
“…A generated review is informative if it can effectively capture the semantic information of the real review. Following (Ni and McAuley, 2018), we examine the aspect coverage of different models. Recall that we have used topic models to tag each sentence with an aspect label (or ID).…”
Section: Results and Analysis
confidence: 99%
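The quoted statement evaluates informativeness via aspect coverage, tagging each sentence with an aspect label from a topic model. A hedged sketch of one way such a coverage score could be computed is shown below; the function name, the set-overlap definition, and the stand-in keyword tagger are assumptions, not the cited papers' actual metric code.

```python
# Hedged sketch: fraction of the real review's aspects that the generated review also covers.
def aspect_coverage(generated_sents, reference_sents, tag_aspect):
    """tag_aspect: callable mapping a sentence to an aspect ID (e.g. argmax topic)."""
    gen_aspects = {tag_aspect(s) for s in generated_sents}
    ref_aspects = {tag_aspect(s) for s in reference_sents}
    if not ref_aspects:
        return 0.0
    return len(gen_aspects & ref_aspects) / len(ref_aspects)

# Toy keyword-based tagger standing in for a trained topic model:
toy_tagger = lambda s: "price" if ("cheap" in s or "price" in s) else "quality"
print(aspect_coverage(["The price is great."], ["Very cheap.", "Feels sturdy."], toy_tagger))  # 0.5
```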
“…• ExpansionNet (Ni and McAuley, 2018): It uses an encoder-decoder framework to generate personalized reviews by incorporating short phrases (e.g., review summaries, product titles) provided as input and introducing aspect-level information (e.g., aspect words).…”
Section: Methods
confidence: 99%
“…However, it has been well perceived that using only the user and product attributes to generate text is unreasonable, since we expect the model to generate coherent texts using only two vectors. This impossibility is also reported by Dong et al. (2017), where they also used sentiment information, and Ni and McAuley (2018), where they additionally used learned aspects and a short version of the text to be able to generate well-formed texts. Nevertheless, the results in this experiment agree with the results above regarding injecting attributes into the attention mechanism; bias-attention performs worse than the random baseline, and CHIM-attention performs the worst among CHIM-based models.…”
Section: Dataset
confidence: 90%
“…Incorporating user and product attributes into NLP models makes them more personalized, and thus user satisfaction can be increased (Baruzzo et al., 2009). Examples of other NLP tasks that use these attributes are text classification (Kim et al., 2019), language modeling (Jaech and Ostendorf, 2018), text generation (Dong et al., 2017; Ni and McAuley, 2018), review summarization (Yang et al., 2018b), machine translation (Michel and Neubig, 2018), and dialogue response generation. On these tasks, the usage of the bias-attention method is frequent since it is trivially easy and there have been no attempts to investigate different possible methods for attribute representation and injection.…”
Section: User/Product Attributes for NLP Tasks
confidence: 99%
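Two of the quoted statements refer to a "bias-attention" style of attribute injection, in which user and product embeddings enter an attention layer as an additive bias on the scores. The sketch below (PyTorch, hypothetical class and parameter names) illustrates that injection point generically; it does not reproduce any cited model.

```python
# Hedged sketch of bias-attention: attribute embeddings added as a bias in attention scoring.
import torch
import torch.nn as nn

class BiasAttentionPooling(nn.Module):
    def __init__(self, hidden_size: int, n_users: int, n_products: int):
        super().__init__()
        self.user_bias = nn.Embedding(n_users, hidden_size)
        self.prod_bias = nn.Embedding(n_products, hidden_size)
        self.proj = nn.Linear(hidden_size, hidden_size)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, token_states, user_ids, product_ids):
        # token_states: (batch, seq, hidden); user_ids, product_ids: (batch,)
        bias = (self.user_bias(user_ids) + self.prod_bias(product_ids)).unsqueeze(1)
        scores = self.v(torch.tanh(self.proj(token_states) + bias)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), token_states).squeeze(1)  # (batch, hidden)
```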
“…Review generation belongs to data-to-text natural language generation [14]. Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, to discover their nuanced opinions about each of its individual aspects [41]. In order to recommend products to users, we need to ultimately predict how users will react to new products.…”
Section: Review Generation
confidence: 99%