2014 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2014.138

Fantope Regularization in Metric Learning

Abstract: This paper introduces a regularization method to explicitly control the rank of a learned symmetric positive semidefinite distance matrix in distance metric learning. To this end, we propose to incorporate in the objective function a linear regularization term that minimizes the k smallest eigenvalues of the distance matrix. It is equivalent to minimizing the trace of the product of the distance matrix with a matrix in the convex hull of rank-k projection matrices, called a Fantope. Based on this new regulariz…
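The regularizer described in the abstract reduces to a simple computation: the sum of the k smallest eigenvalues of the distance matrix M equals tr(WM) when W is the rank-k projector onto the corresponding eigenvectors, a vertex of the Fantope. A minimal NumPy sketch of that identity (the function name is ours, not from the paper's code):

```python
import numpy as np

def fantope_regularizer(M, k):
    """Sum of the k smallest eigenvalues of a symmetric PSD matrix M,
    computed as tr(W M) with W the projector onto those eigenvectors."""
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    U = eigvecs[:, :k]                    # eigenvectors of the k smallest eigenvalues
    W = U @ U.T                           # rank-k projection matrix (a Fantope vertex)
    reg = np.trace(W @ M)                 # linear in M once W is fixed
    assert np.isclose(reg, eigvals[:k].sum())
    return reg
```

The regularizer is zero exactly when M has rank at most d − k, which is how a target rank can be enforced explicitly.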

Cited by 29 publications (32 citation statements)
References 19 publications

“…In [23], the authors add a regularization function to the convex objective that biases the solution such that similar points in a small neighborhood lie on a low-dimensional manifold. Law et al. [12] also use a regularization function that elegantly incorporates explicit control over the rank of the learned Mahalanobis matrix into the objective function. An ADMM-based algorithm is proposed to learn the low-rank metric.…”
Section: Related Work (mentioning)
confidence: 99%
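For intuition about how such rank control can be optimized, here is a simplified projected-subgradient step on loss(M) + λ · (sum of the k smallest eigenvalues of M); this is not the ADMM solver proposed in [12], and `loss_grad`, the step size, and λ are placeholders:

```python
import numpy as np

def psd_projection(M):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None)) @ V.T

def regularized_step(M, loss_grad, k, lr=0.1, lam=1.0):
    """One step on loss(M) + lam * (sum of k smallest eigenvalues of M).

    The regularizer is linearized at the current M: W, the projector onto
    the k smallest eigenvectors, plays the role of its (sub)gradient.
    """
    _, V = np.linalg.eigh(M)
    W = V[:, :k] @ V[:, :k].T               # linearization of the Fantope term
    M_new = M - lr * (loss_grad + lam * W)  # gradient step on the linearized objective
    return psd_projection(M_new)            # restore positive semidefiniteness
```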
“…Without a regularizer in the objective function, the technique requires an early-stopping criterion, which is difficult to tune for each dataset [12]. Other methods [7,9,11] take a projection-free approach and use special regularization functions, such as the LogDet Bregman divergence, which implicitly maintains the positive semidefiniteness constraint and the rank of the learned Mahalanobis matrix.…”
Section: Related Work (mentioning)
confidence: 99%
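The LogDet Bregman divergence mentioned above, D_ld(M, M0) = tr(M M0⁻¹) − log det(M M0⁻¹) − d, diverges as M approaches the boundary of the PSD cone, which is why methods built on it need no explicit PSD projection. A sketch assuming positive definite inputs:

```python
import numpy as np

def logdet_divergence(M, M0):
    """LogDet Bregman divergence D_ld(M, M0) = tr(M M0^-1) - log det(M M0^-1) - d.

    Finite only for positive definite M (given positive definite M0), so using
    it as a regularizer implicitly keeps the learned matrix inside the PSD cone.
    """
    d = M.shape[0]
    P = M @ np.linalg.inv(M0)
    _, logabsdet = np.linalg.slogdet(P)  # log det(P), numerically stable for large d
    return np.trace(P) - logabsdet - d
```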
“…It should be noted that other regularization schemes could also be used, e.g. those based on the Frobenius or nuclear norm [20], or methods giving explicit control of the matrix rank, as in [15].…”
Section: Optimization (mentioning)
confidence: 99%
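To make the contrast in that quote concrete, here is a sketch of the three regularizers, assuming a symmetric PSD input; only the last gives explicit control of the target rank:

```python
import numpy as np

def frobenius_reg(M):
    return np.linalg.norm(M, 'fro') ** 2  # ||M||_F^2: shrinks all eigenvalues uniformly

def nuclear_reg(M):
    # Nuclear norm = sum of singular values; for PSD M this equals tr(M),
    # so every eigenvalue is penalized and the rank is only shrunk globally.
    return np.linalg.norm(M, 'nuc')

def fantope_reg(M, k):
    # Sum of the k smallest eigenvalues only: zero iff rank(M) <= d - k,
    # i.e. the desired rank can be targeted explicitly, as in [15].
    return np.linalg.eigh(M)[0][:k].sum()
```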