Proceedings of the 13th International Conference on Web Search and Data Mining 2020
DOI: 10.1145/3336191.3371790
Distilling Structured Knowledge into Embeddings for Explainable and Accurate Recommendation

Abstract: Recently, the embedding-based recommendation models (e.g., matrix factorization and deep models) have been prevalent in both academia and industry due to their effectiveness and flexibility. However, they also have such intrinsic limitations as lacking explainability and suffering from data sparsity. In this paper, we propose an end-to-end joint learning framework to get around these limitations without introducing any extra overhead by distilling structured knowledge from a differentiable path-based recommend…
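
The abstract describes a joint objective that pairs an embedding-based student with a differentiable path-based teacher. As a rough illustration only, and not the authors' implementation, the sketch below combines a supervised loss on observed interactions with a distillation term that pulls the student's scores toward precomputed teacher scores; the MFStudent class, the alpha weight, and the use of MSE for the distillation term are all assumptions.

```python
# Minimal sketch (not the paper's code): an embedding-based student trained
# jointly with a distillation term toward a path-based teacher's scores.
import torch.nn as nn
import torch.nn.functional as F

class MFStudent(nn.Module):
    """Plain matrix-factorization student: score(u, i) = <p_u, q_i>."""
    def __init__(self, n_users, n_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        # Dot product between user and item embeddings gives the preference score.
        return (self.user_emb(users) * self.item_emb(items)).sum(-1)

def joint_loss(student_scores, labels, teacher_scores, alpha=0.5):
    """Supervised loss on observed interactions plus a distillation term that
    regresses the student's scores onto the (detached) teacher's scores."""
    sup = F.binary_cross_entropy_with_logits(student_scores, labels.float())
    distill = F.mse_loss(student_scores, teacher_scores.detach())
    return sup + alpha * distill
```

In practice the distillation weight alpha would be tuned, and the teacher scores could either be precomputed or produced by a path-based model trained jointly, as the abstract suggests.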

Cited by 37 publications (12 citation statements). References 43 publications.
“…This is achieved by designing a distillation loss function that exploits the teacher model's trained parameters to guide the student model's learning. Recently, a few attempts have been made to design knowledge distillation models in recommendation systems, following the collaborative filtering strategy and focusing on the ranking performance in the top-n recommendation task [16], [17], [34], [35]. However, these studies focus on the conventional recommendation task and are not suitable for conversational systems, where users progressively express their personalized preferences over conversational turns.…”
Section: Discussion
confidence: 99%
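For concreteness, here is a minimal, hypothetical sketch of the kind of distillation loss this statement refers to, where a teacher's scores over a shared candidate item list serve as temperature-softened soft targets for a student ranker; the function name, the KL formulation, and the temperature value are illustrative assumptions, not taken from the cited works [16], [17], [34], [35].

```python
# Hypothetical soft-target distillation loss for top-n recommendation.
import torch.nn.functional as F

def ranking_distillation_loss(student_logits, teacher_logits, temp=3.0):
    """KL divergence between temperature-softened teacher and student
    score distributions over the same candidate item list."""
    soft_targets = F.softmax(teacher_logits.detach() / temp, dim=-1)
    log_student = F.log_softmax(student_logits / temp, dim=-1)
    # Scaling by temp**2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * (temp ** 2)
```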
“…Knowledge graphs can also help to explain a black-box recommendation model. Zhang et al. (2020) proposed a knowledge distillation approach to explain black-box models for recommendation. The authors proposed an end-to-end joint learning framework to combine the advantages of embedding-based recommendation models and path-based recommendation models.…”
Section: Knowledge Graph-based Explainable Recommendation
confidence: 99%
“…Liu et al [6] leverage KD for debiasing in recommendation via uniform data. Zhang et al [31] study the mutual learning between path-based and embedding-based recommendation models. Our work is partially inspired by these studies and serves as the first attempt to apply KD for ensembled CTR prediction.…”
Section: Knowledge Distillation
confidence: 99%
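As an illustration of the mutual-learning idea mentioned in the statement above, the hypothetical sketch below trains two recommenders jointly, each minimizing its own supervised loss plus a KL term toward the other model's detached score distribution; the symmetric-KL formulation and all names are assumptions rather than the cited method.

```python
# Hypothetical mutual-learning step between two recommenders (e.g., a
# path-based model A and an embedding-based model B); not the cited authors' code.
import torch.nn.functional as F

def mutual_learning_losses(scores_a, scores_b, positive_idx, beta=0.5):
    """Each model gets its own supervised loss plus a KL term toward the
    other model's (detached) distribution over the candidate items."""
    kl_a = F.kl_div(F.log_softmax(scores_a, dim=-1),
                    F.softmax(scores_b.detach(), dim=-1), reduction="batchmean")
    kl_b = F.kl_div(F.log_softmax(scores_b, dim=-1),
                    F.softmax(scores_a.detach(), dim=-1), reduction="batchmean")
    sup_a = F.cross_entropy(scores_a, positive_idx)
    sup_b = F.cross_entropy(scores_b, positive_idx)
    return sup_a + beta * kl_a, sup_b + beta * kl_b
```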