2014
DOI: 10.1016/j.ins.2014.03.063

List-wise probabilistic matrix factorization for recommendation

Cited by 52 publications (29 citation statements)
References 15 publications
“…Other methods that optimize discounted cumulative gain (DCG) or normalized DCG (NDCG) can be found in [10], [11]. Examples of list-wise methods that optimize the probability of permutations that map items to ranks include: ListPMF, which represents each user as a probability distribution of the permutations over rated items based on the Plackett-Luce model [12]; ListRank [13], which aims to identify a ranking permutation that minimizes the cross-entropy between the distribution of the observed ranking of items based on user ratings and the predicted rankings with respect to the top-ranked item; methods that optimize the log-posterior of the predicted preference order against the observed preference orders [12]; and methods that leverage deep neural nets (e.g., [4]) to learn the non-linear interaction between user-item pairs (see [14] for a survey of such methods).…”
Section: A. Related Work
confidence: 99%
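The quoted passage mentions two recurring quantities in list-wise learning to rank: the DCG/NDCG ranking metric and the Plackett-Luce top-one probability underlying ListRank-style cross-entropy losses. A minimal, illustrative sketch of both, assuming softmax-style top-one probabilities over graded relevance lists (the function names and example values are hypothetical, not from the cited papers):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """NDCG: DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def top_one_prob(scores):
    """Plackett-Luce top-one probability: softmax over item scores."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def listwise_cross_entropy(observed_ratings, predicted_scores):
    """Cross-entropy between the top-one distributions induced by observed
    ratings and by predicted scores (a ListRank-style list-wise loss)."""
    p = top_one_prob(observed_ratings)   # target distribution from ratings
    q = top_one_prob(predicted_scores)   # model distribution from scores
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# One user's rated item list: a perfectly ordered list has NDCG 1.0,
# and the list-wise loss decreases as predicted scores agree with ratings.
ratings = [5.0, 3.0, 1.0]
scores = [2.1, 0.7, -0.5]
loss = listwise_cross_entropy(ratings, scores)
```

In ListPMF and related matrix-factorization methods, `predicted_scores` would come from the inner products of user and item latent factors, and the loss would be minimized over those factors by gradient descent.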
“…The main contributions of this paper can be summarized as follows: We compared the performance of Top-N-Rank and Top-N-Rank.ReLU with several state-of-the-art list-wise LTR methods [3], [10], [12], [13], [16] using the MovieLens (20M) data set [17] and the Amazon video games data set [18]. All experiments were performed on an Apache Spark cluster [19], and the raw data were stored on the Hadoop Distributed File System (HDFS).…”
Section: B. Overview and Contributions
confidence: 99%
“…These include recommending item ratings [34], tags [12], documents [14], friends [5,15], experts [21,13] and many others in social networks. Various recommendation algorithms are exploited in these systems, ranging from canonical collaborative filtering [16] and graph-based propagation [12,14] to the newer list-wise probabilistic matrix factorization [22]. The enthusiasm in this area is expected to last for years to come, as many new business models in social networks are highly dependent on recommendation accuracy.…”
Section: Recommendation in Social Networks
confidence: 99%
“…[10,31] used view information to enrich positive samples. [6,25] proposed list-wise ranking methods instead of pairwise ones. Hwang et al. [20] utilized both implicit and explicit feedback data to improve the quality of negative sampling.…”
Section: Learning to Rank
confidence: 99%