Boise State ScholarWorks
DOI: 10.18122/b2r717

Exploring Explanations for Matrix Factorization Recommender Systems

Abstract: In this paper we address the problem of finding explanations for collaborative filtering algorithms that use matrix factorization methods. We look for explanations that increase the transparency of the system. To do so, we propose two measures. First, we show a model that describes the contribution of each previous rating given by a user to the generated recommendation. Second, we measure the influence of changing each previous rating of a user on the outcome of the recommender system. We show that under the a…
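The abstract's second measure (the influence of changing a prior rating on a recommendation) can be illustrated with a minimal sketch. This is an assumed setup, not the paper's exact method: given fixed item factors from a trained matrix-factorization model, the user's latent vector is re-solved by ridge regression on their known ratings, and each rating's influence is estimated by perturbing it and observing the change in the predicted score for the target item. All names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4                           # latent dimensionality (assumed)
V = rng.normal(size=(6, k))     # item factors for 6 items (toy data)
rated_items = [0, 1, 2]         # items this user has already rated
ratings = np.array([5.0, 3.0, 1.0])
target_item = 5                 # item whose recommendation we want to explain
lam = 0.1                       # ridge regularization strength

def user_factor(r):
    """Re-solve the user's latent vector from ratings r on rated_items."""
    Vr = V[rated_items]
    return np.linalg.solve(Vr.T @ Vr + lam * np.eye(k), Vr.T @ r)

base_score = user_factor(ratings) @ V[target_item]

# Influence of rating j: change in the target item's predicted score
# when that single rating is nudged by one unit.
influence = []
for j in range(len(ratings)):
    perturbed = ratings.copy()
    perturbed[j] += 1.0
    influence.append(user_factor(perturbed) @ V[target_item] - base_score)

print(influence)
```

Ratings with large-magnitude influence values are the natural candidates for an explanation of the form "this item was recommended mainly because you rated item j highly".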

Cited by 8 publications (8 citation statements)
References 3 publications
“…Explanations can also be used to justify or describe [80]. While early recommender system explanation approaches provided a uniform explanation style for single-source collaborative filtering [30], more recent work explores how to derive explanations for recommender systems based on 'hybrid' multiple sources [36] and matrix factorisation [61]. However, a reported need for explanation may not always correspond to differences in behaviour or performance; in a study of news recommender systems, end-users expressed a desire for explanations, but the number of news items they opened did not change when provided with reasons for their recommendations [70].…”
Section: Interpreting Intelligent Systems
confidence: 99%
“…Because there is usually a tradeoff between explainability and recommendation accuracy, some research has focused on post-hoc explainability of powerful black-box models. Such work includes (Rastegarpanah et al, 2017), which explains MF-based recommender systems using influence functions to determine the effect of each user rating on the recommendation. Cheng et al (2019) also uses an influence-based approach to measure the impact of user-item interactions on a prediction and provides neighborhood-style explanations.…”
Section: Related Work
confidence: 99%
“…{1, 2, 3, 4, 5}, when applying the item-based approach. Equation (12) details how to compute the probability that item i_1 is rated 1 according to Equation (3).…”
Section: E Running Example
confidence: 99%
“…In the context of RS, the main problem of matrix factorization is that the learnt latent space is not easy to interpret [11], so these models are not amenable to explaining their results [12]. The proposed model in [10] has been developed in order to alleviate this problem by applying a probabilistic approach for interpreting the factors of users and items.…”
Section: Introduction
confidence: 99%