Companion Proceedings of the Web Conference 2022
DOI: 10.1145/3487553.3524208
PEAR: Personalized Re-ranking with Contextualized Transformer for Recommendation

Cited by 9 publications (2 citation statements). References 18 publications.
“…The Deep Listwise Context Model proposed by Ai et al [1] also models the list context with a recurrent neural network, yet this model is not used to actually predict listwide signals. The most related work to ours is the Personalized Re-ranking with Contextualized Transformer for Recommendation (PEAR), recently proposed by Li et al [24]. In the personalized recommendation setting, the PEAR model combines a pointwise ranking objective, i.e.…”
Section: Related Work
“…This kind of transformation can be learned end-to-end by applying an attention mechanism over X_I, e.g. through the use of the Transformer [38] architecture that has seen tremendous success in Natural Language Processing (NLP) and has recently also been applied to ranking problems [24,30,34,37].…”
Section: The Listwise Transformer
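The excerpt above describes contextualizing each item's representation by attending over the whole candidate list X_I. A minimal sketch of that idea — single-head scaled dot-product self-attention over a list of item features — is shown below. All names, shapes, and weight initializations here are illustrative assumptions, not the actual PEAR or citing-paper code.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a list of items.

    X: (n_items, d) raw item features (the X_I of the excerpt).
    Returns contextualized item representations, one row per item,
    where each row is a mixture over the entire candidate list.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n_items, n_items) affinities
    return softmax(scores, axis=-1) @ V       # each item attends to all items

# Hypothetical toy setup: 5 candidate items with 8-dimensional features.
rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)
print(H.shape)  # (5, 8): one contextualized vector per item
```

In a re-ranker along the lines the excerpt sketches, a scoring head would then map each contextualized row of H to a relevance score, so every item's score depends on the full list context rather than on that item alone.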