Proceedings of the 16th ACM Conference on Recommender Systems 2022
DOI: 10.1145/3523227.3546767
Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)

Cited by 176 publications (59 citation statements)
References 46 publications
“…Memory Network [10], Self-Attention [25,40], GCN [5,46], Machine Reasoning [7,24,38] and Foundation Models [19] to improve the model expressiveness, so that it can better capture the abundant and complex information from user/item features and interaction histories. The key insight behind all these methods is to accurately encode the long-range histories, but this paradigm does not optimize the long-term user rewards.…”
Section: Related Work 4.1 Sequential Recommendation and Session-based ...
confidence: 99%
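The "encode long-range histories" paradigm that this citing work attributes to these methods can be illustrated with a minimal self-attention sequence encoder. This is a generic sketch, not any cited paper's implementation; the class name, dimensions, and hyperparameters are all illustrative assumptions.

import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    # Generic self-attention encoder over a user's interaction history;
    # index 0 is reserved for padding.
    def __init__(self, num_items, dim=64, heads=2, layers=2, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, dim, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, dim)
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, item_ids):
        # item_ids: (batch, seq_len) of item indices, 0 where padded.
        positions = torch.arange(item_ids.size(1), device=item_ids.device)
        h = self.item_emb(item_ids) + self.pos_emb(positions)
        h = self.encoder(h, src_key_padding_mask=item_ids.eq(0))
        return h[:, -1]  # representation of the last step of the history

Scoring the next item by a dot product between this history representation and the item embeddings recovers the standard sequential-recommendation setup the quoted statement describes.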
“…For instance, LMRec-Sys [64] utilizes prompts to reconstitute some recommendation tasks as multi-token cloze tasks, aiming to address zero-shot and data-efficiency issues. P5 [19] is the first attempt to integrate different recommendation tasks within a shared conditional language generation framework (i.e., T5 [42]). To effectively convert user interaction data into text sequences, a variety of prompts are designed to accommodate the specific characteristics of each recommendation task.…”
Section: Language Model For Recommendation
confidence: 99%
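As a rough illustration of how P5 casts recommendation as conditional text generation, the sketch below builds a text-to-text training pair from interaction data. The template wording is hypothetical; P5's actual prompt collections are defined in [19].

def build_sequential_prompt(user_id, history, next_item=None):
    # Hypothetical P5-style template: the model is given the source text
    # and trained to generate the target text (T5-style seq2seq).
    items = ", ".join(f"item_{i}" for i in history)
    source = (f"user_{user_id} has interacted with {items} . "
              f"what is the next recommendation for the user ?")
    target = f"item_{next_item}" if next_item is not None else ""
    return source, target

# Example: one training pair for a user whose history is [113, 7, 88]
src, tgt = build_sequential_prompt(42, [113, 7, 88], next_item=501)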
“…In numerical evaluations, we employ Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) for rating prediction. For sequential recommendation and direct recommendation, we adopt top-k Hit Ratio (HR@k) and top-k Normalized Discounted Cumulative Gain (NDCG@k), which are widely used in related works [19,67]. Specifically, we report results on HR@{1,5,10} and NDCG@{5,10} for evaluation.…”
Section: Metrics
confidence: 99%
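For reference, the metrics quoted above can be computed as follows for a single held-out test item per user; this is the standard textbook formulation, not code from [19] or [67].

import math

def rmse(preds, targets):
    # Root Mean Square Error over predicted vs. true ratings.
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds))

def mae(preds, targets):
    # Mean Absolute Error over predicted vs. true ratings.
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def hr_at_k(ranked_items, true_item, k):
    # HR@k: 1 if the held-out item appears in the top-k ranking, else 0.
    return int(true_item in ranked_items[:k])

def ndcg_at_k(ranked_items, true_item, k):
    # NDCG@k with a single relevant item: IDCG = 1, so the score is
    # 1 / log2(rank + 1) when the item is ranked within the top k.
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == true_item:
            return 1.0 / math.log2(rank + 1)
    return 0.0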