Proceedings of the 13th International Conference on Web Search and Data Mining 2020
DOI: 10.1145/3336191.3371776

Consistency-Aware Recommendation for User-Generated Item List Continuation

Abstract: User-generated item lists are popular on many platforms. Examples include video-based playlists on YouTube, image-based lists (or "boards") on Pinterest, book-based lists on Goodreads, and answer-based lists on question-answer forums like Zhihu. As users create these lists, a common challenge is in identifying what items to curate next. Some lists are organized around particular genres or topics, while others are seemingly incoherent, reflecting individual preferences for what items belong together. Furthermore…


Cited by 37 publications (17 citation statements)
References 36 publications
“…Furthermore, we devise a personalized scheme to adaptively aggregate the output of these modules. This is in contrast to many existing works that combine local and global modules by simple summation and concatenation [3,17,19,30,55,56,58] either without personalized adaptation or without learnable importance adjustment.…”
Section: Introduction
confidence: 82%
“…Moreover, the coefficients differ for different users, which shows that it self-adaptively learns different patterns for personalized users. This further illustrates that it is not an ideal way to set a static and equal trade-off between local and global branches, as many works [3,17,19,30,55,56,58] did in the past. In our adaptive paradigm, the model can fuse information with the dynamical use of locality bias (CNNs) to improve the performance of the Transformer.…”
Section: Stability Study (RQ3)
confidence: 95%
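The two statements above both contrast static fusion with a learned, per-user mixture of a local (CNN) branch and a global (Transformer) branch. The following is a minimal PyTorch sketch of that general idea; the module and tensor names are illustrative assumptions, not code from the cited papers.

# A small gate conditioned on the user embedding produces per-user mixing
# coefficients for the local and global branch outputs, instead of a fixed,
# user-independent summation. Names and shapes are illustrative only.
import torch
import torch.nn as nn

class PersonalizedFusion(nn.Module):
    def __init__(self, hidden_dim: int, user_dim: int):
        super().__init__()
        # Maps the user embedding to a scalar mixing weight in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(user_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, local_out, global_out, user_emb):
        # local_out, global_out: (batch, seq_len, hidden_dim)
        # user_emb: (batch, user_dim)
        alpha = self.gate(user_emb).unsqueeze(1)          # (batch, 1, 1)
        return alpha * local_out + (1.0 - alpha) * global_out

# Static baseline for comparison, as criticized in the quotes above:
# fused = 0.5 * local_out + 0.5 * global_out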
“…Specifically, self-attention based methods such as SASRec [11] and BERT4Rec [25] have been regarded as having strong potential due to their ability to capture long-range dependencies between items. Recently, many improvements on self-attention based solutions have been proposed, taking into consideration personalization [28], item similarity [15], consistency [7], multiple interests [5], information dissemination [29], pseudo-prior item augmentation [18], motifs [3], etc. However, most current SR methods assume that only item IDs are available and do not take side information such as item attributes into consideration, ignoring the fact that such highly related information could provide extra supervision signals.…”
Section: Related Work 2.1 Sequential Recommendation
confidence: 99%
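For context on the self-attention based sequential recommenders named in this statement (e.g., SASRec, BERT4Rec), here is a compact sketch of the common scoring pattern: a causal Transformer encoder over the item-ID sequence, with the last position's hidden state scored against all item embeddings. Dimensions, layer counts, and class names are illustrative assumptions, not the published configurations.

# Causal self-attention over the interaction history; the final hidden state
# is dotted against the item embedding table to rank candidate next items.
import torch
import torch.nn as nn

class TinySelfAttnRec(nn.Module):
    def __init__(self, num_items: int, dim: int = 64, heads: int = 2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, dim, padding_idx=0)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, item_seq):                       # (batch, seq_len) item IDs
        x = self.item_emb(item_seq)                    # (batch, seq_len, dim)
        seq_len = item_seq.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len)
        h = self.encoder(x, mask=causal)               # (batch, seq_len, dim)
        last = h[:, -1, :]                             # state after the last item
        return last @ self.item_emb.weight.T           # scores over all items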
“…Sequential recommendation methods usually discover the internal connections between previously purchased items to accurately predict the next item. Generally, sequence models are divided into two main types: (1) order-based models view user sequences as item orders, mainly mining the relationships within item sequences using techniques such as Markov chains [22], recurrent neural networks (RNNs) [23,24], convolutional neural networks (CNNs) [25,26], and attention mechanisms [27]; GRU4Rec used a gated recurrent unit (GRU) to model click sequences for session-based recommendation [28], and an improved version further improves its Top-N recommendation performance [29]. However, at each time step, the RNN takes the previous step's state and the current action as its input.…”
Section: Sequence Recommendation
confidence: 99%
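As a rough companion to the GRU4Rec-style approach described in this statement, the sketch below shows a minimal GRU next-item predictor. It is an illustrative assumption of the general pattern (embed the item sequence, run a GRU, score the next item from the last hidden state), not the published implementation.

# Minimal GRU-based next-item model in the spirit of GRU4Rec; names and
# hyperparameters are placeholders for illustration.
import torch
import torch.nn as nn

class TinyGRURec(nn.Module):
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_items + 1)

    def forward(self, item_seq):              # (batch, seq_len) of item IDs
        x = self.item_emb(item_seq)           # (batch, seq_len, dim)
        h, _ = self.gru(x)                    # hidden state at every step
        return self.out(h[:, -1, :])          # scores for the next item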