Proceedings of the 26th International Conference on World Wide Web 2017
DOI: 10.1145/3038912.3052639

Collaborative Metric Learning

Cited by 479 publications (440 citation statements). References 36 publications.

“…However, this is troublesome for the following two reasons. First, the sparse ground-truth labels $\{x_u\}$ treat all the unobserved (i.e., not clicked) items equally, while those unobserved items are by no means all irrelevant to that user [12] (otherwise, there is no point making recommendations). This makes it difficult, or at least inefficient, for the embedding-based model to learn the underlying structure in the data.…”
Section: Methodology, 2.1 Problem Formulation and Preliminaries
confidence: 99%
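
To make that first point concrete, here is a hypothetical implicit-feedback label vector (the variable name and values are illustrative, not from the paper): the zeros conflate items the user dislikes with items the user simply never saw, yet a loss over $\{x_u\}$ penalizes both identically.

    import numpy as np

    # Hypothetical labels x_u for one user over six items: 1 = clicked, 0 = not clicked.
    # Item 2 may be unseen-but-relevant while item 5 may be genuinely irrelevant;
    # the binary labels cannot tell these two cases apart.
    x_u = np.array([1, 0, 0, 1, 0, 0])
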
“…Conventional MF-based methods operate in inner product spaces, which are flexible but can easily overfit. To this end, CML [15] imposes the triangle inequality constraint, by adopting metric embeddings to represent users and items. Here the preference score is estimated by the negative $\ell_2$ distance:…”
Section: Collaborative Metric Learning (CML)
confidence: 99%
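
This scoring rule is simple to state in code. Below is a minimal numpy sketch (the function and variable names are my own, not from the paper): a user's preference for an item is the negative Euclidean distance between their embeddings, so closer items score higher.

    import numpy as np

    def cml_scores(user_vec, item_matrix):
        """Score all items for one user as the negative l2 distance.

        user_vec:    (d,)   user embedding
        item_matrix: (n, d) item embeddings
        returns:     (n,)   higher score = closer in the metric space
        """
        return -np.linalg.norm(item_matrix - user_vec, axis=1)

    # Recommending then reduces to a nearest-neighbour query, e.g.:
    # top_items = np.argsort(-cml_scores(u, V))[:10]
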
“…To achieve satisfactory performance in terms of both accuracy and efficiency, we propose to use the retrieved items as candidates, and adopt sophisticated ranking models to refine the results. As the preference ranking problem has been heavily studied [13,15,32], we employ existing models to study the effect of the CIGAR framework, and propose a candidate-oriented sampling strategy to further boost accuracy.…”
Section: Candidate-oriented Re-ranking
confidence: 99%
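
The retrieve-then-rerank pipeline this excerpt describes can be summarized in a few lines. The helper names below are placeholders, not the CIGAR API: `retrieve` stands for any fast candidate generator and `rank` for any sophisticated ranking model such as CML.

    def recommend(user, retrieve, rank, n_candidates=200, top_k=10):
        """Two-stage recommendation: cheap retrieval, then careful re-ranking.

        retrieve(user, n) -> n candidate item ids (fast, recall-oriented)
        rank(user, items) -> one preference score per item (slow, accurate)
        """
        candidates = retrieve(user, n_candidates)            # stage 1
        scores = rank(user, candidates)                      # stage 2
        reranked = sorted(zip(candidates, scores), key=lambda p: -p[1])
        return [item for item, _ in reranked[:top_k]]
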
“…where $u_i, v_j$ are, respectively, user and item latent vectors in $\mathbb{R}^d$; $\mathcal{B} \subset \mathcal{S}$ is the set of positive pairs in the considered mini-batch; $\mathcal{N}_{ij} \subset \{k \mid (i,k) \notin \mathcal{S}\}$ is a set of negative samples per triplet; $\alpha > 0$ is a margin constant; $D$ is the Euclidean distance; $w_{ij}$ is a weight, based on the number of negatives in $\mathcal{N}_{ij}$ falling inside the $\alpha$-ball, that penalizes items at a lower rank [20]; $[\cdot]_+ = \max(\cdot, 0)$; and $\mathcal{L}_c$ is a regularization term (weighted by the hyperparameter $\lambda_c$) used to de-correlate the dimensions of the learned metric [7]. The recommendation for a user is then made by finding the $k$ nearest items around him or her in the latent space.…”
Section: Collaborative Metric Learning
confidence: 99%
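
Taken together, these definitions describe CML's weighted triplet hinge loss, which (from the symbols above) has the form $\sum_{(i,j)\in\mathcal{B}} \sum_{k\in\mathcal{N}_{ij}} w_{ij}\,[\alpha + D(u_i,v_j)^2 - D(u_i,v_k)^2]_+ + \lambda_c \mathcal{L}_c$. The numpy sketch below illustrates it under two stated assumptions: the rank weight $w_{ij}$ is approximated WARP-style from the sampled negatives that violate the margin, and the covariance regularizer $\mathcal{L}_c$ is omitted for brevity.

    import numpy as np

    def cml_batch_loss(U, V, batch, negatives, alpha=0.5):
        """Weighted hinge loss over a mini-batch of positive (user, item) pairs.

        U, V:      user/item embeddings, shapes (n_users, d) and (n_items, d)
        batch:     iterable of positive pairs (i, j)
        negatives: dict mapping (i, j) -> sampled negative item ids
        alpha:     margin of the alpha-ball
        """
        n_items = V.shape[0]
        total = 0.0
        for i, j in batch:
            d_pos = np.sum((U[i] - V[j]) ** 2)                # squared distance to positive
            negs = np.asarray(negatives[(i, j)])
            d_neg = np.sum((U[i] - V[negs]) ** 2, axis=1)     # distances to sampled negatives
            hinge = np.maximum(alpha + d_pos - d_neg, 0.0)    # [.]_+ per negative
            # WARP-style rank estimate from the sampled impostors inside the
            # alpha-ball (an approximation assumed for this sketch):
            impostors = np.count_nonzero(hinge > 0)
            w_ij = np.log1p(impostors * n_items / len(negs))
            total += w_ij * hinge.sum()
        return total
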