Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021
DOI: 10.1145/3404835.3462968
Sequential Recommendation with Graph Neural Networks

Abstract: Sequential recommendation aims to leverage users' historical behaviors to predict their next interaction. Existing works have not yet addressed two main challenges in sequential recommendation. First, user behaviors in their rich historical sequences are often implicit and noisy preference signals; they cannot sufficiently reflect users' actual preferences. In addition, users' dynamic preferences often change rapidly over time, and hence it is difficult to capture user patterns in their historical sequences. I…

Cited by 266 publications (75 citation statements)
References 30 publications
“…The historical sessions are completely different from the dialogue/conversation history used in previous CRS works [2,39,41], as their "historical" information is actually the historical turns of the current dialogue session. The usage of historical sessions in CRS also differs largely from the historical user-item interactions [1,31] in traditional recommendation, since recommendation in CRS is strongly constrained by the current user intentions, while traditional recommendation is not. Thus, the main goal of the historical sessions learner is to learn current-related and regular information from historical sessions without impeding the current session information.…”
Section: Motivations and Discussion on UCCR
confidence: 99%
“…Specifically, UCCR learns multi-aspect user preferences mainly from three information sources: the user's current dialogue session, historical dialogue sessions, and look-alike users. UCCR mainly consists of four parts: (1) We first design a historical session learner to capture users' diverse preferences in their historical sessions, in addition to learning from the current session. Specifically, we extract multi-view user preferences from the dialogues, including the word-level semantic view, the entity-level knowledge view, and the item-level consuming view.…”
Section: Introduction
confidence: 99%
“…With respect to long-term interests modeling, we include NCF [16], DIN [51] and LightGCN [14]. For short-term interests modeling, we compare with Caser [41], GRU4REC [17], DIEN [50], SASRec [20] and SURGE [6]. We also include SLi-Rec [47], the state-of-the-art model for long- and short-term interests modeling.…”
Section: Methods
confidence: 99%
“…We conduct experiments on two datasets, a public e-commerce dataset and an industrial short-video dataset, which are also adopted by the state-of-the-art sequential recommendation model, SURGE [6]. Both are million-scale datasets collected from real-world applications.…”
Section: Appendix A.1 Datasets
confidence: 99%
“…The core idea is that, instead of using a predefined distance function, we leverage metric learning [74] to learn a distance metric over the input space (i.e., the node embedding matrix H) from the adjacency matrix A, which preserves the node relationships (i.e., A supervises the distance learning). In this paper, we adopt a weighted cosine distance (defined in Equation 2) [8,83] as our learnable distance function φ.…”
Section: Graph Metric Learning
confidence: 99%
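To make the weighted cosine idea in the statement above concrete, here is a minimal NumPy sketch. It is an illustration, not the cited paper's actual implementation: the names `H` (node embedding matrix), `w` (learnable per-dimension weight vector), and the function itself are assumptions. Each embedding dimension is reweighted by `w` before ordinary cosine similarity is computed over all node pairs; in the full method, `w` would be trained so that the resulting similarities agree with the adjacency matrix A.

```python
import numpy as np

def weighted_cosine_similarity(H, w):
    """Pairwise weighted cosine similarity between node embeddings.

    H: node embedding matrix, shape (n_nodes, d)
    w: weight vector, shape (d,) -- learnable in the full method
    Returns an (n_nodes, n_nodes) similarity matrix.
    """
    Hw = H * w                                            # reweight each dimension
    norms = np.linalg.norm(Hw, axis=1, keepdims=True)     # row-wise L2 norms
    Hn = Hw / (norms + 1e-12)                             # normalize (avoid div by 0)
    return Hn @ Hn.T                                      # cosine similarity matrix

# Tiny example: three 2-d node embeddings, second dimension weighted 2x.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
w = np.array([1.0, 2.0])
S = weighted_cosine_similarity(H, w)
```

The learned weights let the model emphasize or suppress embedding dimensions when deciding graph structure, which a fixed cosine distance cannot do.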