A new kernelization framework for Mahalanobis distance learning algorithms
2010
DOI: 10.1016/j.neucom.2009.11.037

Cited by 68 publications (58 citation statements)
References 20 publications
“…Many existing distance learning methods are not intuitively kernelizable. Recently, Chatpatanasiri et al. [16] showed various techniques for kernelizing some popular metric learning approaches. Their results are easily extended to this approach in order to learn nonlinear metrics.…”
Section: Mirror Descent for Metric Learning (mentioning, confidence: 99%)
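To ground the discussion, here is a minimal sketch of the linear Mahalanobis distance that these learners optimize; the data and the matrix M below are illustrative, not taken from the cited papers:

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M^2(x, y) = (x - y)^T M (x - y),
    valid whenever M is symmetric positive semidefinite."""
    d = x - y
    return float(d @ M @ d)

# With M = I this reduces to the squared Euclidean distance.
x = np.array([1.0, 2.0])
y = np.array([0.0, 1.0])
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # illustrative PSD matrix
print(mahalanobis_sq(x, y, M))  # 4.0
```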
“…There are two primary approaches to kernelizing metric learning algorithms: one based on the direct application of the kernel trick, and the other based on the application of the Kernel Principal Components Analysis (KPCA) framework [16]. We use the first approach here.…”
Section: Kernel MDML (mentioning, confidence: 99%)
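A hedged sketch of the first route (direct kernel trick): restrict the metric to the span of the mapped training points, M = Φ A Φ^T, so the learned distance touches the data only through kernel evaluations. The kernel choice and function names below are illustrative, not the cited authors' implementation:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_vector(x, X_train, kernel=rbf):
    """k_x = (k(x, x_1), ..., k(x, x_n)): the only way x enters the metric."""
    return np.array([kernel(x, xi) for xi in X_train])

def kernelized_mahalanobis_sq(x, y, X_train, A):
    """With M = Phi A Phi^T (Phi stacking the mapped training points),
    d_M^2(phi(x), phi(y)) = (k_x - k_y)^T A (k_x - k_y), where A is the
    n x n PSD matrix learned in place of M."""
    d = kernel_vector(x, X_train) - kernel_vector(y, X_train)
    return float(d @ A @ d)
```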
“…Since we have n = ℓ + u total examples, the span of {φ_i} has dimensionality n by our assumption. According to [16], each example φ_i can be represented as ϕ_i ∈ R^n with respect to a new…”
Section: The KPCA-Trick Algorithm (mentioning, confidence: 99%)
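A minimal sketch of this KPCA-trick route, assuming scikit-learn's KernelPCA; the downstream learner call is a hypothetical placeholder for any linear Mahalanobis metric learner:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# n = l + u total examples; KPCA gives each phi_i explicit coordinates
# in R^n, so an unmodified *linear* metric learner can be run on them.
X = np.random.RandomState(0).randn(30, 5)  # toy data, n = 30
kpca = KernelPCA(n_components=None, kernel="rbf", gamma=0.5)  # keeps all
# components with nonzero eigenvalue (up to n of them)
Phi = kpca.fit_transform(X)                # row i is the new representation of x_i
# linear_metric_learner(Phi, labels)       # hypothetical downstream call
```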
“…The success of SVMs is therefore highly dependent on the choice of kernel, and the usual ones, such as the linear, the Gaussian, and the histogram intersection, may not be appropriate for capturing the actual, semantic similarity between images for some specific concepts. Better kernels based on tuned Mahalanobis distances were obtained by minimizing the ratio between intra- and inter-class distances [7,24,28], while others were designed using semidefinite programming [30]. To take extra advantage of different settings, multiple kernels (MKL) were also introduced [1,2,39,43,51]; these consider convex (and possibly sparse) linear combinations of elementary kernels and have proved to be more suitable [47].…”
Section: Introduction (mentioning, confidence: 99%)
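As a sketch of the two ideas in this excerpt, a kernel induced by a tuned Mahalanobis distance and an MKL-style convex combination of elementary kernels; the functions below are illustrative, not the constructions of the cited works:

```python
import numpy as np

def mahalanobis_rbf(x, y, M, gamma=1.0):
    """Gaussian kernel on a learned Mahalanobis distance:
    k(x, y) = exp(-gamma * (x - y)^T M (x - y)). PSD whenever M is PSD,
    since M = L^T L makes this an ordinary RBF kernel on Lx, Ly."""
    d = x - y
    return np.exp(-gamma * float(d @ M @ d))

def mkl_kernel(x, y, kernels, weights):
    """MKL-style convex combination k = sum_m beta_m * k_m with
    beta_m >= 0 and sum_m beta_m = 1 (possibly sparse)."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return sum(wm * km(x, y) for km, wm in zip(kernels, w))
```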