2018
DOI: 10.1177/0165551518808188
A similarity measure based on Kullback–Leibler divergence for collaborative filtering in sparse data

Abstract: In the neighbourhood-based collaborative filtering (CF) algorithms, a user similarity measure is used to find other users similar to an active user. Most of the existing user similarity measures rely on co-rated items. However, there are not enough co-rated items in sparse datasets, which usually leads to poor prediction. In this article, a new similarity scheme is proposed, which breaks free of the constraint of co-rated items. Moreover, an item similarity measure based on the Kullback–Leibler (KL) div…
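The abstract is truncated, so the paper's exact formulation is not reproduced here. As a rough illustration of the idea, the following Python sketch builds each item's rating distribution from all of its ratings and turns a symmetrised KL divergence between two such distributions into a similarity score; the function names, the Laplace smoothing and the 1/(1+d) mapping are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def rating_distribution(ratings, levels=(1, 2, 3, 4, 5), alpha=1.0):
    """Empirical distribution of an item's ratings over the rating levels,
    with Laplace smoothing so the KL divergence stays finite."""
    ratings = np.asarray(ratings)
    counts = np.array([np.sum(ratings == r) for r in levels], dtype=float) + alpha
    return counts / counts.sum()

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i)."""
    return float(np.sum(p * np.log(p / q)))

def item_similarity(ratings_i, ratings_j):
    """Map a symmetrised KL divergence between two rating distributions
    into a similarity in (0, 1]; no co-rated users are required."""
    p = rating_distribution(ratings_i)
    q = rating_distribution(ratings_j)
    d = 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
    return 1.0 / (1.0 + d)

# Two items rated by entirely different users can still be compared.
print(item_similarity([5, 4, 5, 4], [4, 5, 5, 4]))   # close rating profiles
print(item_similarity([5, 5, 4, 5], [1, 2, 1, 2]))   # very different profiles
```

Because each distribution is estimated from all ratings of an item, the comparison does not depend on co-rated items, which matches the motivation stated in the abstract.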

Cited by 14 publications (5 citation statements). References 35 publications.
“…In order to visually analyze the ability of different prediction methods to detect targets, we calculate the Kullback–Leibler (KL) divergence [43] between the probability distribution of the mean amplitude and the probability distribution of the MSE of different prediction methods; the KL divergence is given in Equation (12):…”
Section: Sea Clutter Prediction Results In Different Range Cells
Citation type: mentioning; confidence: 99%
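The statement above compares two empirical distributions with KL divergence. As a minimal, self-contained illustration of that definition (the histogram binning and the synthetic samples standing in for the amplitude and MSE values are assumptions, not the cited paper's Equation (12)):

```python
import numpy as np

def kl_from_samples(x, y, bins=20, eps=1e-12):
    """KL divergence D_KL(P || Q) between histogram estimates of two sample
    sets, computed over a common set of bins."""
    lo = min(x.min(), y.min())
    hi = max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=bins, range=(lo, hi))
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
amplitude = rng.normal(1.0, 0.3, 1000)   # stand-in for mean-amplitude samples
mse = rng.normal(1.2, 0.4, 1000)         # stand-in for prediction-MSE samples
print(kl_from_samples(amplitude, mse))
```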
“…The resulting KL divergence value is not a distance metric as it does not satisfy the triangle inequality and is asymmetric, meaning the divergence of p(x) from q(x) differs from the divergence of q(x) from p(x). The KL divergence is one of the most popular ways to compare probability distributions in information theory and data science, and has mathematical properties that make it uniquely suitable for measuring relative information (reviewed in Deng et al, 2019). KL divergence from the prey to the predator SPD was computed in each case using the `philentropy::KL` function from the 'philentropy' library (Drost, 2018).…”
Section: Methods
Citation type: mentioning; confidence: 99%
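The quoted passage points out that KL divergence is asymmetric and therefore not a distance metric. A short NumPy check of that asymmetry (re-implemented here rather than calling the R function `philentropy::KL` mentioned above, with arbitrary example distributions):

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])

kl = lambda a, b: float(np.sum(a * np.log(a / b)))
print(kl(p, q), kl(q, p))   # the two directions differ, so KL is not symmetric
```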
“…Sahu et al. apply item characteristics and user tags to matrix factorization, solving data-sparsity problems with cross-domain recommender systems [33]. The KLCF method [34] uses all user ratings to calculate similarity, and uses KL divergence to calculate item similarity for weight adjustment, breaking the rule of using only co-rated items. Experiments show that this method can be applied to sparse matrices.…”
Section: Literature Review
Citation type: mentioning; confidence: 99%
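The KLCF summary above says user similarity is computed from all ratings, with a KL-based item similarity used as a weight; the exact formula is not given in this excerpt, so the sketch below is only a hypothetical weighted scheme in that spirit (the cross-item weighting and the agreement term are assumptions, and `toy_item_sim` stands in for a KL-based item similarity such as the one sketched under the abstract).

```python
def user_similarity(ratings_u, ratings_v, item_sim, max_rating=5):
    """Hypothetical user similarity over ALL items rated by u and v (not only
    co-rated ones): every cross-item pair (i, j) contributes a rating-agreement
    term weighted by the item similarity item_sim(i, j)."""
    num = den = 0.0
    for i, r_ui in ratings_u.items():
        for j, r_vj in ratings_v.items():
            w = item_sim(i, j)
            num += w * (1.0 - abs(r_ui - r_vj) / (max_rating - 1))
            den += w
    return num / den if den else 0.0

# The two users share no co-rated item, yet a similarity is still defined.
u = {"film_a": 5, "film_b": 4}
v = {"film_c": 4, "film_d": 5}
toy_item_sim = lambda i, j: 1.0 if i == j else 0.5   # stand-in for a KL-based measure
print(user_similarity(u, v, toy_item_sim))
```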