2020
DOI: 10.3390/app10144756

Personalized Standard Deviations Improve the Baseline Estimation of Collaborative Filtering Recommendation

Abstract: Baseline estimation is a critical component of latent factor-based collaborative filtering (CF) recommendation: it obtains baseline predictions by evaluating global deviations of both users and items from personalized ratings. Classical baseline estimation presupposes that a user’s factual rating range is the same as the system’s given rating range. However, from observations on real datasets of movie recommender systems, we found that different users have different actual rating ranges, and users c…
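For context on the abstract, the sketch below shows the classical baseline estimate commonly used in latent factor CF, b_ui = μ + b_u + b_i, where μ is the global mean rating and b_u, b_i are the user and item deviations. The simple mean-based estimation (without regularization) and the variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def classical_baseline(ratings):
    """Classical baseline estimate b_ui = mu + b_u + b_i.

    `ratings` is a list of (user, item, rating) triples. Regularization
    is omitted for brevity; this is an illustrative sketch, not the
    paper's personalized-standard-deviation model.
    """
    mu = np.mean([r for _, _, r in ratings])  # global mean rating

    user_ratings, item_ratings = {}, {}
    for u, i, r in ratings:
        user_ratings.setdefault(u, []).append(r)
        item_ratings.setdefault(i, []).append(r)

    # user/item deviations: how far each mean sits from the global mean
    b_u = {u: np.mean(rs) - mu for u, rs in user_ratings.items()}
    b_i = {i: np.mean(rs) - mu for i, rs in item_ratings.items()}

    def predict(u, i):
        return mu + b_u.get(u, 0.0) + b_i.get(i, 0.0)
    return predict

# Example: a generous rater ("alice") gets a higher baseline prediction
triples = [("alice", "m1", 5), ("alice", "m2", 4), ("bob", "m1", 2), ("bob", "m2", 3)]
predict = classical_baseline(triples)
print(round(predict("alice", "m1"), 2))  # 4.5
```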

Cited by 7 publications (5 citation statements)
References 29 publications (80 reference statements)
“…When item j is a very popular commodity that many people like, this term will be very close to 1. The formula above therefore gives many items a high similarity to popular products, which is obviously not a desirable property of a recommendation system [9]. To avoid recommending hot items, the following formula can be used:…”
Section: Commodity Recommendation Function Strategy
confidence: 99%
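The formula referred to in this quotation is elided above. As a hedged illustration of the general idea it describes (damping the similarity of very popular items), the sketch below uses a common co-occurrence similarity normalized by sqrt(|N(i)|·|N(j)|); it stands in for the cited formula rather than reproducing it.

```python
import math
from collections import defaultdict

def item_similarity(user_items):
    """Co-occurrence item similarity with popularity penalization.

    `user_items` maps each user to the set of items they interacted with.
    sim(i, j) = |N(i) & N(j)| / sqrt(|N(i)| * |N(j)|)
    The denominator keeps very popular ("hot") items from having a high
    similarity to everything. Illustrative sketch only.
    """
    co_counts = defaultdict(lambda: defaultdict(int))  # co-occurrence counts
    popularity = defaultdict(int)                      # |N(i)| per item
    for items in user_items.values():
        for i in items:
            popularity[i] += 1
            for j in items:
                if i != j:
                    co_counts[i][j] += 1

    sim = defaultdict(dict)
    for i, row in co_counts.items():
        for j, c in row.items():
            sim[i][j] = c / math.sqrt(popularity[i] * popularity[j])
    return sim

sims = item_similarity({"u1": {"a", "b"}, "u2": {"a", "b", "c"}, "u3": {"a", "c"}})
print(sims["b"]["a"])  # popular item "a" is penalized by its high |N(a)|
```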
“…However, in the literature (see, for example, [1][2][3][12][13][14][15]), the evaluation and comparison of recommender system quality is still heavily dominated by accuracy. Indeed, because of the way they are optimized (i.e., by maximizing/minimizing accuracy metrics in a standard cross-validation framework), recommender systems are penalized in accuracy evaluation mainly when they suggest items that are relevant but whose relevance derives from these non-accuracy-focused dimensions (even though users increasingly give credit to these dimensions). This leads either to the exclusive use of accuracy measures as evaluation metrics, or to supplementing (rather than integrating) accuracy-optimized metrics with non-accuracy-based metrics.…”
Section: Evaluation Metrics
confidence: 99%
“…Our work aims to partially address the need to validate non-accuracy-focused algorithms fairly (i.e., in comparison with accuracy-focused algorithms). In addition, our model, which is inspired by validation processes where implicit feedback is available (through direct contact with users), postulates that if the aim is to evaluate whether a recommender system is accurate in suggesting novel items, this should not be done (as is usually the case in the literature; see, for example, [1][2][3][14][15]) by providing popular items as test examples, but rather more novel items (thus showing the actual ability of the system to recommend novel items accurately), which is precisely the goal of our approach. Note, therefore, that our model does not claim to deal with biases inherent in existing datasets (such as, for example, the two datasets used in this work; see Section 4.1) or biases inherent in existing algorithms (for such biases, see, for example, recent work on fairness for recommender systems [23][24][25]), but rather with the popularity bias introduced by the evaluation processes traditionally applied in the field of recommender systems (i.e., random cross-validation or bootstrapping), which make a joint comparison of accuracy and another dimension (such as novelty) unfair under traditional processes.…”
Section: Evaluation Process
confidence: 99%
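The quoted passage argues that accuracy on novel items should be measured on a test set built from novel (long-tail) items rather than from a random split. A minimal sketch of that idea follows, assuming a simple popularity threshold; the function name and the `novel_fraction` parameter are illustrative, not taken from the cited work.

```python
def novelty_focused_split(ratings, novel_fraction=0.5):
    """Hold out interactions with the least popular items for testing.

    `ratings` is a list of (user, item, rating) triples. Instead of a
    random split, the test set is drawn from long-tail items, so test
    accuracy reflects accuracy on novel recommendations. The threshold
    choice (novel_fraction) is an illustrative assumption.
    """
    counts = {}
    for _, i, _ in ratings:
        counts[i] = counts.get(i, 0) + 1

    # items sorted from least to most popular; the long tail goes to test
    items_by_pop = sorted(counts, key=counts.get)
    n_novel = int(len(items_by_pop) * novel_fraction)
    novel_items = set(items_by_pop[:n_novel])

    train = [t for t in ratings if t[1] not in novel_items]
    test = [t for t in ratings if t[1] in novel_items]
    return train, test
```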
“…For instance, an RS issuing movie recommendations will always consider renowned films, like The Godfather, to be good recommendations for a user, independently of his/her particular idiosyncrasy. To overcome this problem, the authors of [2] propose a model to alleviate this 'baseline course' by correcting the implicit bias of the system. For this purpose, they formulate a unified baseline estimation model based on the standard deviation of the user's features from the average system's features.…”
Section: Recommendation Models
confidence: 99%
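To illustrate the general idea this citing paper attributes to [2] (using per-user standard deviations so that users with narrow actual rating ranges are treated fairly), the sketch below standardizes each user's ratings with that user's own mean and standard deviation before learning item deviations. It is a hedged sketch of the general mechanism only, not the paper's exact unified baseline model.

```python
import numpy as np

def personalized_zscore_baseline(ratings):
    """Baseline estimate that accounts for each user's own rating spread.

    Instead of assuming every user spans the full system rating range,
    each user's ratings are standardized with that user's own mean and
    standard deviation, and item deviations are learned in that
    standardized space. Illustrative sketch, not the paper's method.
    """
    by_user, by_item = {}, {}
    for u, i, r in ratings:
        by_user.setdefault(u, []).append(r)
    mu_u = {u: np.mean(rs) for u, rs in by_user.items()}
    sigma_u = {u: np.std(rs) or 1.0 for u, rs in by_user.items()}  # guard zero spread

    # item deviation measured in each rater's personalized (z-score) units
    for u, i, r in ratings:
        by_item.setdefault(i, []).append((r - mu_u[u]) / sigma_u[u])
    b_i = {i: np.mean(zs) for i, zs in by_item.items()}

    def predict(u, i):
        # map the item's standardized deviation back into user u's own scale
        return mu_u[u] + sigma_u[u] * b_i.get(i, 0.0)
    return predict
```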