Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020
DOI: 10.1145/3340531.3412754

GraphSAIL: Graph Structure Aware Incremental Learning for Recommender Systems

Abstract: Given the convenience of collecting information through online services, recommender systems now consume large-scale data and play an increasingly important role in improving user experience. With the recent emergence of Graph Neural Networks (GNNs), GNN-based recommender models have shown the advantage of modeling the recommender system as a user-item bipartite graph to learn representations of users and items. However, such models are expensive to train and difficult to update frequently to provide the most up…
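To make the bipartite-graph framing in the abstract concrete, here is a minimal, self-contained sketch of one embedding-propagation step over a user-item interaction graph, in the LightGCN style common to GNN recommenders. It is illustrative only: the tensor names, sizes, edge list, and the single-layer propagation are assumptions for exposition, not GraphSAIL's actual architecture.

```python
import torch

# Minimal sketch of one propagation step on a user-item bipartite graph,
# in the spirit of LightGCN-style GNN recommenders (illustrative only).
n_users, n_items, dim = 4, 6, 8
user_emb = torch.randn(n_users, dim)
item_emb = torch.randn(n_items, dim)

# Interactions as (user, item) index pairs; adj is the bipartite adjacency.
edges = torch.tensor([[0, 1], [0, 3], [1, 2], [2, 2], [3, 5]])
adj = torch.zeros(n_users, n_items)
adj[edges[:, 0], edges[:, 1]] = 1.0

# Symmetric degree normalization: D_u^{-1/2} A D_i^{-1/2}.
d_u = adj.sum(1, keepdim=True).clamp(min=1).rsqrt()
d_i = adj.sum(0, keepdim=True).clamp(min=1).rsqrt()
norm_adj = d_u * adj * d_i

# One layer of propagation: users aggregate their items' embeddings
# and items aggregate their users' embeddings.
user_next = norm_adj @ item_emb
item_next = norm_adj.t() @ user_emb

# Recommendation scores are inner products of the propagated embeddings.
scores = user_next @ item_next.t()   # shape: (n_users, n_items)
```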

Cited by 46 publications (31 citation statements)
References 21 publications
“…In online recommendation services, Xu et al. developed a framework based on incremental learning for graph neural networks to address the catastrophic forgetting problem faced during incremental training; it implements a graph structure preservation strategy to preserve users’ long-term preferences when the model is updated [49]. In personalized video highlight recommendation, Wu et al.…”
Section: Related Work
confidence: 99%
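As a rough illustration of what a graph structure preservation strategy can look like, the sketch below penalizes drift in each node's similarity distribution over its neighbors between the previous (teacher) and updated (student) embeddings. The function name, the KL formulation, and the `neighbors` mapping are assumptions for exposition; GraphSAIL itself combines several distillation terms, and this shows only a plausible local-neighborhood variant.

```python
import torch
import torch.nn.functional as F

def local_structure_loss(old_emb, new_emb, neighbors):
    """Sketch of a local structure-preservation loss: for each node, the
    distribution of similarities to its neighbors under the updated model
    is pushed toward the distribution under the previous model.
    `neighbors` maps node id -> LongTensor of neighbor ids (assumed input)."""
    loss = 0.0
    for node, nbrs in neighbors.items():
        # Similarity of the node to its neighbors, old vs. new embeddings.
        p_old = F.softmax(old_emb[nbrs] @ old_emb[node], dim=0)
        log_p_new = F.log_softmax(new_emb[nbrs] @ new_emb[node], dim=0)
        # KL(old || new): penalize drift in local neighborhood structure.
        loss = loss + F.kl_div(log_p_new, p_old, reduction="sum")
    return loss / max(len(neighbors), 1)
```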
“…By treating past models as the teacher model, Wang et al. [94] regularize the learning process with a knowledge distillation loss to prevent catastrophic forgetting. Subsequent studies employ distillation techniques with specific attention to GNNs [99] and session-based RSs [66]. However, simply limiting how much the network changes does not by itself transfer useful knowledge to the current task.…”
Section: Model-based Approach
confidence: 99%
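A minimal sketch of that teacher-student regularization, assuming a frozen previous-period model whose scores act as soft targets for the updated model; the temperature, the weighting `lam`, and all names are illustrative rather than any cited paper's exact loss.

```python
import torch
import torch.nn.functional as F

def distillation_regularizer(teacher_scores, student_scores, T=2.0):
    """Sketch of output-level knowledge distillation: the frozen
    previous-period model is the teacher, and the updated model is
    regularized to keep its score distribution close to the teacher's."""
    p_teacher = F.softmax(teacher_scores / T, dim=-1)
    log_p_student = F.log_softmax(student_scores / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Total objective (sketch): loss on the new period's data plus the
# distillation term, e.g.
#   loss = rec_loss(new_batch) + lam * distillation_regularizer(t_s, s_s)
```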
“…In other words, some of the knowledge in past models might not be applicable to the current task. Zhang et al. [109] introduce a method that explicitly optimizes for the next period's data, which greatly alleviates this problem in [66,94,99]. To optimize for future use, it develops a transfer module to combine the knowledge of different periods' models.…”
Section: Model-based Approach
confidence: 99%
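The transfer-module idea can be sketched as a small gating network that learns how to mix a previous period's parameters (or embeddings) with the current ones, trained so the merged model does well on the next period's data. Everything below, including the class name, the linear gate, and the mixing rule, is a hypothetical reading of that description, not the actual method of [109].

```python
import torch
import torch.nn as nn

class TransferModule(nn.Module):
    """Hypothetical transfer module: learns a per-dimension gate that
    decides how much of the previous period's knowledge to carry into
    the current period's parameters (all names are assumptions)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)  # learned per-dimension mixing

    def forward(self, w_old, w_new):
        # Gate in (0, 1): how much past knowledge to carry forward.
        g = torch.sigmoid(self.gate(torch.cat([w_old, w_new], dim=-1)))
        return g * w_old + (1 - g) * w_new

# Training sketch: merged = transfer(w_old, w_new); then optimize the
# transfer module's parameters against the *next* period's interactions.
```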
“…Due to their strong capability of modeling user-item, user-user, and item-item relationships, graph convolutional networks (GCNs) have been widely used as recommendation models. Some recent works [1,51,52,56,61] develop continual learning methods for GCN-based recommenders to achieve streaming recommendation, also known as continual graph learning for streaming recommendation.…”
Section: Introduction
confidence: 99%
“…To enable continual GCN-based recommendation, most works focus on two realizations: experience replay [1,52,68] and knowledge distillation/weight regularization [29,51,56,61]. Although these methods have achieved acceptable results, there are still drawbacks that hinder them from being applied in real-world systems.…”
Section: Introduction
confidence: 99%
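For the experience-replay realization mentioned above, a common building block is a fixed-capacity reservoir of past interactions that is replayed alongside each new period's data. The sketch below shows classic reservoir sampling; the class name, capacity, and training-loop comment are assumptions, not any cited system's implementation.

```python
import random

class ReservoirBuffer:
    """Fixed-size, uniformly sampled reservoir of past user-item
    interactions, mixed into each incremental update (sketch)."""
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, interaction):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(interaction)
        else:
            # Classic reservoir sampling: each of the `seen` items ends up
            # in the buffer with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = interaction

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

# Incremental update sketch: train on new-period interactions plus
# a replayed batch, e.g. new_batch + buf.sample(k).
```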