2022
DOI: 10.1109/tkde.2021.3049692
HAM: Hybrid Associations Models for Sequential Recommendation

Cited by 34 publications (33 citation statements) · References 23 publications
“…The primary difference among M²-gp²t, Dream and FPMC is that M²-gp²t explicitly models users' general preferences using the frequencies of the items that each user has interacted with, while Dream and FPMC implicitly model them using the hidden states of RNNs or user embeddings. Given the sparse nature of recommendation datasets (Table 2), it is possible that the learned hidden states or user embeddings cannot represent the user preferences well, as the signals of user preferences are smoothed out by data sparsity during the recurrent updates, or by the pooling or weighting schemes used to learn user embeddings, as some other work has also noticed [18], [36], [37]. The superior performance of M²-gp²t over Dream and FPMC on all the datasets demonstrates the effect of explicitly modeling users' general preferences.…”
Section: Table 3 Also Shows That Among the 4 Model-based Methods
confidence: 99%
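To make the contrast in this excerpt concrete, here is a minimal sketch of frequency-based general-preference scoring: items are scored for a user directly from interaction counts rather than from learned hidden states or embeddings. The function name, shapes, and normalization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def general_preference_scores(interactions, n_items):
    # Count how often the user interacted with each item and normalize
    # to a frequency distribution over the catalog.
    counts = np.bincount(interactions, minlength=n_items).astype(float)
    total = counts.sum()
    # Fall back to a uniform prior for users with no history
    # (an assumption of this sketch, not stated in the excerpt).
    return counts / total if total > 0 else np.full(n_items, 1.0 / n_items)

# Example: a user who interacted with item 3 twice and item 7 once.
print(general_preference_scores([3, 3, 7], n_items=10))
```

Because the scores come straight from observed counts, they stay meaningful even on sparse data, which is the advantage the excerpt attributes to explicit modeling.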
“…Table 3 shows that M²-gp²t significantly outperforms Sets2Sets on all the datasets except TMall in terms of both recall@k and NDCG@k. The primary differences between M²-gp²t and Sets2Sets are 1) M²-gp²t explicitly models the transition patterns among items using the encoder-decoder-based ed-Trans, while Sets2Sets implicitly models the transition patterns using RNNs, and 2) when calculating the recommendation scores, M²-gp²t learns a single weight for each user (i.e., α in Equation 10), whereas Sets2Sets learns different weights for different items for each user. Given the sparse nature of the recommendation datasets, weights for different items on each user may not be well learned [18], [36]. Thus, such weights may not necessarily help better differentiate users' general preferences over items.…”
Section: Table 3 Also Shows That Among the 4 Model-based Methods
confidence: 99%
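The single per-user weight the excerpt mentions can be sketched as a scalar blend of the two score vectors. Treating α as a convex-combination weight is an assumption of this sketch; the excerpt only states that one weight is learned per user (α in Equation 10).

```python
import numpy as np

def hybrid_scores(general, transition, alpha):
    # One scalar alpha per user blends general-preference scores with
    # transition scores (convex combination assumed for this sketch).
    return alpha * general + (1.0 - alpha) * transition

# Example: 5 items, one user with alpha = 0.7.
general = np.array([0.4, 0.1, 0.3, 0.1, 0.1])
transition = np.array([0.0, 0.5, 0.2, 0.2, 0.1])
print(hybrid_scores(general, transition, alpha=0.7))
```

A per-item scheme like the one the excerpt attributes to Sets2Sets would replace the scalar alpha with a learned vector of per-item weights for each user, which is far more parameters to estimate from sparse interactions.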