2019 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2019.00047
Collaborative Distillation for Top-N Recommendation

Abstract: Knowledge distillation (KD) is a well-known method to reduce inference latency by compressing a cumbersome teacher model into a small student model. Despite the success of KD in classification tasks, applying KD to recommender models is challenging due to the sparsity of positive feedback, the ambiguity of missing feedback, and the ranking problem associated with top-N recommendation. To address these issues, we propose a new KD model for the collaborative filtering approach, namely collaborative distillati…

Cited by 31 publications (69 citation statements)
References 25 publications
“…• Ranking Distillation (RD) [25]: a KD method for recommender systems that distills knowledge using the highest-ranked items from the teacher's predictions. • Collaborative Distillation (CD) [13]: the state-of-the-art KD method for recommender systems. CD samples items from the teacher's predictions based on their ranking, then uses them for distillation.…”
Section: Methods (mentioning, confidence: 99%)
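The rank-based sampling attributed to CD above can be sketched as follows. This is a hypothetical illustration, not the paper's exact procedure: the function name, the geometric decay of sampling probability with rank, and the `temperature` parameter are all assumptions made for the sake of the example. The key contrast with RD is that lower-ranked items still have a nonzero chance of being selected, rather than being cut off at a fixed top-K.

```python
import numpy as np

def rank_based_sample(teacher_scores, num_samples, temperature=10.0, rng=None):
    """Sample item indices with probability that decays with teacher rank.

    Hypothetical sketch of rank-aware sampling: items the teacher ranks
    higher are chosen more often, but lower-ranked items can still be
    drawn, unlike top-K truncation.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(teacher_scores, dtype=float)
    # ranks[i] = 0 for the teacher's top item, 1 for the next, and so on.
    ranks = np.argsort(np.argsort(-scores))
    # Sampling probability decays geometrically with rank (assumed form).
    probs = np.exp(-ranks / temperature)
    probs /= probs.sum()
    return rng.choice(scores.size, size=num_samples, replace=False, p=probs)
```

The sampled indices can then be fed to any per-item distillation loss; the temperature controls how sharply the sampling concentrates on the teacher's top items.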
“…We propose a new method, RRD, that transfers knowledge from the teacher's predictions with direct consideration of the ranking orders among items. Unlike the existing methods [13, 25], which distill the knowledge of one item at a time, RRD formulates distillation as a ranking-matching problem between the teacher's recommendation list and the student's. To this end, RRD adopts the list-wise learning-to-rank approach [29] and trains the student to preserve the ranking orders predicted by the teacher.…”
Section: Introduction (mentioning, confidence: 99%)
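A list-wise ranking-matching objective of the kind described above can be illustrated with the standard Plackett-Luce (ListMLE-style) negative log-likelihood of the teacher's ordering under the student's scores. This is a generic list-wise sketch, not the paper's exact RRD loss; the function name and the choice of Plackett-Luce likelihood are assumptions for illustration.

```python
import numpy as np

def listwise_distillation_loss(student_scores, teacher_order):
    """Plackett-Luce negative log-likelihood of the teacher's ranking
    under the student's scores (generic list-wise sketch).

    teacher_order lists item indices from the teacher's best to worst;
    the loss is 0 only in the limit where the student assigns the first
    item an overwhelmingly larger score than the rest, and grows as the
    student's scores disagree with the teacher's ordering.
    """
    s = np.asarray(student_scores, dtype=float)[list(teacher_order)]
    loss = 0.0
    for i in range(s.size):
        tail = s[i:]
        m = tail.max()
        log_z = m + np.log(np.exp(tail - m).sum())  # stable log-sum-exp
        loss += log_z - s[i]  # -log P(item i chosen next | remaining items)
    return loss
```

Because each term is a log-partition over the items not yet ranked, a student whose scores agree with the teacher's ordering incurs a strictly smaller loss than one whose scores reverse it.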