2012 IEEE 12th International Conference on Data Mining
DOI: 10.1109/icdm.2012.38
Metric Learning from Relative Comparisons by Minimizing Squared Residual

Cited by 43 publications (38 citation statements)
References 9 publications
“…By virtue of this training principle, [35] has significantly boosted the performance of kinship verification. Liu et al. [33] developed a metric learning algorithm that minimizes squared residuals from relative comparisons, targeting settings where pair-wise constraints are not natural to obtain. However, these two approaches cannot be directly used to exploit relative similarity information in metric learning for person re-identification, because such information is unavailable in practical person re-identification.…”
Section: Related Work
confidence: 99%
“…Computing the L2 distance between data points is a strong baseline, which sometimes outperforms true distance-learning methods, especially for high-dimensional data. Four of the distance-learning methods [36,10,26,21] learn a global linear transformation of the data, an approach commonly referred to as Mahalanobis metric learning. The L2 distance between the transformed vectors is used as a proxy distance function.…”
Section: Proxying Distance Via Metrization and Symmetrization
confidence: 99%
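The equivalence the excerpt above relies on can be sketched briefly: learning a Mahalanobis matrix M = LᵀL is the same as measuring plain L2 distance after applying the linear map L. The names `L` and `M` below are illustrative, not taken from any cited implementation.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(0)
L = rng.normal(size=(2, 3))   # a global linear transformation of the data
M = L.T @ L                   # the induced positive semi-definite metric

x, y = rng.normal(size=3), rng.normal(size=3)

# d_M(x, y)^2 equals the squared L2 distance between Lx and Ly,
# which is why L2 on transformed vectors serves as the proxy distance.
lhs = mahalanobis_sq(x, y, M)
rhs = float(np.sum((L @ x - L @ y) ** 2))
assert np.isclose(lhs, rhs)
```

Because M = LᵀL is only positive semi-definite, the induced "distance" may collapse some directions to zero, which is exactly what a learned dimensionality-reducing transform exploits.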
“…Depending on this performance, triplets are dynamically selected so that points near the boundaries are used as constraint points. Changing the triplets over the iterations gives better results than the triplet selection method proposed in [23]. However, they focus only on points near the boundaries; we propose to focus on points near the boundaries while also keeping observations that are far from them.…”
Section: Metric Learning and Related Work
confidence: 99%
“…4) Relative labels [23]: consider four different objects x_i, x_j, x_k, and x_l when writing the constraint. In this setting, x_i and x_j are chosen so that their distance is lower than the distance between x_k and x_l: d(x_i, x_j) ≤ d(x_k, x_l).…”
Section: Dynamic Local Weights For Metric Learning: A Problem Analysis
confidence: 99%
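The relative-comparison constraint quoted above is exactly the kind of supervision the cited paper minimizes squared residuals over. The sketch below is a minimal illustration of a squared-hinge residual on such quadruplet constraints under a Mahalanobis metric; the function names and the exact penalty form are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def squared_residual_loss(quadruplets, M):
    """Sum of squared residuals over violated relative comparisons.

    Each quadruplet (xi, xj, xk, xl) encodes d(xi, xj) <= d(xk, xl).
    A satisfied constraint contributes zero; a violated one contributes
    the squared amount by which it is violated.
    """
    loss = 0.0
    for xi, xj, xk, xl in quadruplets:
        r = mahalanobis_sq(xi, xj, M) - mahalanobis_sq(xk, xl, M)
        if r > 0:  # constraint violated: the "close" pair is farther apart
            loss += r ** 2
    return loss

# With the identity metric, (0,0)-(1,0) is closer than (0,0)-(3,0),
# so this constraint is satisfied and incurs no loss.
xi, xj = np.array([0.0, 0.0]), np.array([1.0, 0.0])
xk, xl = np.array([0.0, 0.0]), np.array([3.0, 0.0])
M = np.eye(2)
assert squared_residual_loss([(xi, xj, xk, xl)], M) == 0.0
```

Minimizing this loss over positive semi-definite M (typically with a regularizer to keep M well-conditioned) yields a metric consistent with the relative comparisons, without ever needing absolute pair-wise labels.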