2023
DOI: 10.1609/aaai.v37i11.26604

Disentangled CVAEs with Contrastive Learning for Explainable Recommendation

Abstract: Modern recommender systems are increasingly expected to provide informative explanations that enable users to understand the reason for particular recommendations. However, previous methods struggle to interpret the input IDs of user–item pairs in real-world datasets, failing to extract adequate characteristics for controllable generation. To address this issue, we propose disentangled conditional variational autoencoders (CVAEs) for explainable recommendation, which leverage disentangled latent preference factors…
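The abstract names the core architecture: a conditional VAE that conditions on user and item IDs and splits its latent space into disentangled preference factors for controllable explanation generation. The snippet below is a minimal sketch of that idea, assuming PyTorch; the dimensions, module names, and the factor-wise view where a contrastive objective would attach are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a conditional VAE with factor-wise (disentangled) latents.
# All sizes and names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class DisentangledCVAE(nn.Module):
    def __init__(self, n_users, n_items, embed_dim=64, n_factors=4, factor_dim=16):
        super().__init__()
        self.n_factors, self.factor_dim = n_factors, factor_dim
        latent_dim = n_factors * factor_dim
        self.user_emb = nn.Embedding(n_users, embed_dim)
        self.item_emb = nn.Embedding(n_items, embed_dim)
        # Encoder: map the (user, item) condition to factor-wise Gaussian parameters.
        self.enc = nn.Sequential(nn.Linear(2 * embed_dim, 128), nn.ReLU())
        self.mu_head = nn.Linear(128, latent_dim)
        self.logvar_head = nn.Linear(128, latent_dim)
        # Decoder: reconstruct an explanation representation from z and the condition.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + 2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, user_ids, item_ids):
        cond = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        h = self.enc(cond)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        # View z as [batch, n_factors, factor_dim]: each chunk is one latent preference
        # factor; a contrastive loss over these chunks would encourage disentanglement.
        z_factors = z.view(-1, self.n_factors, self.factor_dim)
        recon = self.dec(torch.cat([z, cond], dim=-1))
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return recon, z_factors, kl
```

In the full model the decoder would feed a text generator that produces the explanation; the sketch stops at a dense representation to stay short.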

Cited by 4 publications (2 citation statements)
References 20 publications
“…However, LLMs face the hallucination problem resulting in generating low-quality explanations. Some studies address part of low-quality problems, such as questionable review data and personalization, by controlled text generation (Hada and Shevade 2021), personalized variational autoencoder (Cai and Cai 2022; Wang et al. 2023), and retrieval model (Xie et al. 2023). So far, there is no research that comprehensively addresses the low-quality problems of explanations generated by LLM-based ER models.…”
Section: Related Work
confidence: 99%
“…Following the previous studies (Chen 2020, 2021; Wang et al. 2023), each dataset is randomly divided into training, validation, and testing sets with a ratio of 8:1:1. We repeat all experiments 5 times independently, with each iteration involving a re-division of the dataset.…”
Section: Experimental Settings
confidence: 99%
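The cited setup is a simple evaluation protocol: a random 8:1:1 train/validation/test split, repeated over 5 independent runs with the dataset re-divided each time. Below is a minimal sketch of that protocol, assuming NumPy and a generic list of interaction records; the names are illustrative, not the citing paper's code.

```python
# Sketch of the 8:1:1 random split repeated over 5 independent runs.
# `dataset_records` stands in for (user, item, explanation) tuples; illustrative only.
import numpy as np

def split_811(records, seed):
    """Randomly divide records into train/valid/test with an 8:1:1 ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(records))
    n_train = int(0.8 * len(records))
    n_valid = int(0.1 * len(records))
    train = [records[i] for i in idx[:n_train]]
    valid = [records[i] for i in idx[n_train:n_train + n_valid]]
    test = [records[i] for i in idx[n_train + n_valid:]]
    return train, valid, test

dataset_records = list(range(1000))  # placeholder dataset
for run in range(5):  # 5 independent runs, each with a fresh re-division
    train, valid, test = split_811(dataset_records, seed=run)
    # ... train the model on `train`, tune on `valid`, report metrics on `test` ...
```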