Large Margin Local Metric Learning (2014)
DOI: 10.1007/978-3-319-10605-2_44

Cited by 30 publications (18 citation statements).
References 15 publications.
“…Figure 2 shows examples of an image affected by the three levels of noise we tested: none, medium and strong. We compare our method, Uncertainty-Aware Joint Bayesian (UA-JB), to three other methods: Joint Bayesian (JB) (Chen et al, 2012) to which our method is equivalent in the absence of noise, ITML (Davis et al, 2007) and LMLML (Bohné et al, 2014) in single metric mode. We start by reducing the dimensionality to 100 using UA-PPCA for UA-JB and standard PCA for the three others as prescribed by the authors.…”
Section: MNIST
confidence: 99%
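The excerpt above describes a standard preprocessing step: projecting the raw image vectors down to 100 dimensions with PCA before metric learning. A minimal numpy-only sketch of that reduction (via SVD of the centered data; the data here is a random stand-in, not MNIST):

```python
import numpy as np

def pca_reduce(X, n_components=100):
    """Project rows of X onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes,
    # ordered by decreasing explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy stand-in for flattened 28x28 MNIST images (784-dim pixel vectors).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 784))
Z = pca_reduce(X, n_components=100)
print(Z.shape)  # (200, 100)
```

In the paper's setup, UA-JB uses an uncertainty-aware variant (UA-PPCA) in place of this plain projection, while the baselines use standard PCA as sketched here.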
“…For each experiment we have evaluated the performance of all the methods on a test set of HR images and a test set of LR images. The results of the proposed method (UA-JB), Joint Bayesian (Chen et al, 2012), ITML (Davis et al, 2007) and LMLML (Bohné et al, 2014) are presented in Table 4. UA-JB performs well in all configurations and, it is worth noting, its use of uncertainty makes it more robust than the other methods.…”
Section: Resolution Change
confidence: 99%
“…Since the discriminative power of input features may vary across neighborhoods, learning a single global metric can be suboptimal. This has motivated the development of local metric learning approaches [16,1,13,42,2], which increase the discriminative power of global Mahalanobis metric learning by learning a number of local metrics.…”
Section: Local Metric Learning
confidence: 99%
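For context on what the local approaches generalize: a global Mahalanobis metric scores a pair with $(x_i - x_j)^\top M (x_i - x_j)$ for a single PSD matrix $M = L^\top L$, which is equivalent to Euclidean distance after the linear map $L$. A minimal sketch of that equivalence (toy random data, not from any cited experiment):

```python
import numpy as np

def mahalanobis_sq(xi, xj, M):
    """Squared Mahalanobis distance (xi - xj)^T M (xi - xj)."""
    d = xi - xj
    return float(d @ M @ d)

rng = np.random.default_rng(1)
L = rng.normal(size=(3, 3))
M = L.T @ L  # PSD by construction
xi, xj = rng.normal(size=3), rng.normal(size=3)

dist_sq = mahalanobis_sq(xi, xj, M)
# Equivalent view: squared Euclidean distance after mapping x -> L x.
euclid_sq = float(np.sum((L @ xi - L @ xj) ** 2))
print(abs(dist_sq - euclid_sq) < 1e-9)  # True
```

Local metric learning replaces the single $M$ with several matrices $M_k$, one per region of the input space, which is the construction the next excerpt discusses.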
“…$M(x_i, x_j)$. The definitions of $M(x_i, x_j)$, such as $\sum_k w_k(x_i, x_j)\, M_k$ in [20], where $w_k$ is defined as $P(k \mid x_i) + P(k \mid x_j)$ to guarantee symmetry and $P(k \mid x)$ is the posterior probability that the point $x$ belongs to the $k$-th component of a Gaussian mixture model (GMM), are nonetheless not very intuitive.…”
Section: Introduction
confidence: 99%
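The weighting scheme the excerpt describes can be sketched directly: combine per-cluster metrics $M_k$ with weights $w_k(x_i, x_j) = P(k \mid x_i) + P(k \mid x_j)$, where the posteriors come from a GMM. A numpy-only toy version with hand-fixed isotropic components and hand-fixed $M_k$ (the actual method fits the mixture and learns each $M_k$; the function names here are illustrative, not from the paper):

```python
import numpy as np

def gmm_posteriors(x, means, var=1.0, priors=None):
    """P(k|x) for a GMM with isotropic components of variance `var`."""
    K = len(means)
    priors = np.full(K, 1.0 / K) if priors is None else priors
    log_lik = -0.5 * np.sum((x - means) ** 2, axis=1) / var
    w = priors * np.exp(log_lik - log_lik.max())  # stabilized softmax-style
    return w / w.sum()

def local_metric(xi, xj, means, metrics):
    """Pairwise metric sum_k w_k(xi, xj) M_k with w_k = P(k|xi) + P(k|xj)."""
    # Adding the two posteriors makes the weights symmetric in (xi, xj).
    w = gmm_posteriors(xi, means) + gmm_posteriors(xj, means)
    return np.einsum("k,kij->ij", w, metrics)

# Toy example: two clusters, each with its own (scaled identity) local metric.
means = np.array([[0.0, 0.0], [5.0, 5.0]])
metrics = np.stack([np.eye(2), 2.0 * np.eye(2)])
xi, xj = np.array([0.1, -0.2]), np.array([4.8, 5.1])

M = local_metric(xi, xj, means, metrics)
d = xi - xj
dist_sq = float(d @ M @ d)
print(M.shape, dist_sq > 0)
```

Since each posterior vector sums to 1, the weights always sum to 2, and swapping the arguments leaves $M(x_i, x_j)$ unchanged, which is the symmetry property the excerpt attributes to this choice of $w_k$.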