Proceedings of the ACM Web Conference 2022 2022
DOI: 10.1145/3485447.3512098
Disentangling Long and Short-Term Interests for Recommendation

Cited by 66 publications (19 citation statements)
References 36 publications
“…We take the mean representation of short-term behaviors as the proxy of short-term interests. As for long-term interests, we argue that directly taking the mean representation of the entire long-term behavior sequence, as Zheng [32] does, leads to sub-optimal performance. Long-term user preference is relatively stable and updates slowly; if we simply average the representations, the temporal variability of long-term interests would be ignored.…”
Section: Interests Disentanglement Module
confidence: 97%
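The proxy construction described in this statement can be sketched as follows. This is a minimal illustration, not the cited paper's implementation: the function name, the single split index separating "long-term" from "short-term" behaviors, and the use of plain mean pooling for both windows (the very choice the excerpt argues is sub-optimal for long-term interests) are all assumptions for the sketch.

```python
import numpy as np

def interest_proxies(behavior_embs: np.ndarray, split: int):
    """Mean-pooled proxies for long- and short-term interests.

    behavior_embs: (T, d) array, one embedding per interaction, oldest first.
    split: index where the recent (short-term) window begins (illustrative).
    Returns (long_proxy, short_proxy), each of shape (d,).
    """
    # Short-term proxy: mean of the recent behavior window.
    short_proxy = behavior_embs[split:].mean(axis=0)
    # Naive long-term proxy: mean over all older behaviors. The excerpt
    # argues this ignores the slow temporal drift of long-term preference.
    long_proxy = behavior_embs[:split].mean(axis=0)
    return long_proxy, short_proxy
```

A time-weighted or windowed average over the older behaviors would be one way to retain the temporal variability the excerpt says plain averaging discards.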
“…Inspired by Zheng [32], who introduces a self-supervised framework for disentanglement, we adapt it for product search. We take the mean representation of short-term behaviors as the proxy of short-term interests.…”
Section: Interests Disentanglement Module
confidence: 99%
“…Hence S²-DHCN [44] conducts contrastive learning between representations of different hyper-graphs without random dropout, but its fixed ground truths restrict the improvements. CSLR [53] generates self-supervised signals from long-term and short-term interests but does not further explore more effective signals. Besides, two works [44, 49] on social recommendation [36] and session-based recommendation [5, 12, 13, 26] address this problem via a self-supervised co-training framework.…”
Section: Self-supervised Learning in Sequential Recommendation
confidence: 99%
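The self-supervised signal mentioned here, generated from long-term and short-term interests, can be sketched as a BPR-style contrastive objective: each disentangled interest vector should be closer to its own mean-pooled proxy than to the other interest's proxy. This is a sketch under that assumption, not the exact loss of any cited paper; the function name and the dot-product similarity are illustrative choices.

```python
import numpy as np

def proxy_contrastive_loss(u_long, u_short, p_long, p_short):
    """BPR-style contrastive self-supervision (illustrative sketch).

    u_long, u_short: disentangled long-/short-term interest vectors.
    p_long, p_short: their mean-pooled behavior proxies.
    Each interest is pulled toward its own proxy and pushed from the other's.
    """
    def sim(a, b):
        # Dot-product similarity (an assumption; cosine would also work).
        return float(np.dot(a, b))

    def bpr(pos, neg):
        # -log sigmoid(pos - neg): small when pos > neg.
        return -np.log(1.0 / (1.0 + np.exp(-(pos - neg))))

    return (bpr(sim(u_long, p_long), sim(u_long, p_short)) +
            bpr(sim(u_short, p_short), sim(u_short, p_long)))
```

When each interest vector aligns with its own proxy, the loss is low; if the two interests collapse onto the wrong proxies, the loss grows, which is what makes the signal useful for disentanglement.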