2011
DOI: 10.1109/tpami.2010.173

Kernel Optimization in Discriminant Analysis

Abstract: Kernel mapping is one of the most widely used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function that maps the original, nonlinearly separable problem to a space of intrinsically larger dimensionality in which the classes are linearly separable. A major problem in the design of kernel methods is finding the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation w…
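As a concrete illustration of the mapping idea described in the abstract (a sketch, not the paper's criterion itself), the snippet below uses the explicit feature map of the degree-2 polynomial kernel k(x, y) = (xᵀy)² on a toy two-ring dataset; all variable names and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two concentric rings in R^2: the inner ring is class 0, the outer ring
# class 1, so no straight line separates them in the input space.
n = 200
angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
radii = np.where(np.arange(n) < n // 2, 1.0, 3.0)
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = (radii > 2.0).astype(int)

def phi(X):
    """Explicit feature map of the kernel k(x, y) = (x . y)^2, R^2 -> R^3."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1 ** 2, np.sqrt(2.0) * x1 * x2, x2 ** 2])

Z = phi(X)

# In the mapped space, z1 + z3 equals the squared radius, so the plane
# z1 + z3 = 5 (the midpoint between 1^2 = 1 and 3^2 = 9) is a linear
# decision boundary that separates the two classes perfectly.
pred = (Z[:, 0] + Z[:, 2] > 5.0).astype(int)
print("accuracy of a linear rule in the mapped space:", (pred == y).mean())
```

For kernels with free parameters (e.g., the bandwidth of a radial-basis kernel) the same idea applies, except that a poor parameter choice can fail to linearize the problem, which is precisely the selection problem the paper addresses.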

Cited by 86 publications (10 citation statements)
References 29 publications
“…In recent years, kernel optimization based algorithms have become popular and competitive. It is therefore necessary to compare our algorithm with the kernel optimization based algorithms described in [36] on the AR database. For simplicity, we also resized the images to 29×21 pixels.…”
Section: Results
confidence: 99%
“…The images in session 1 were used for training and those in session 2 for testing; one may refer to [36] for details. The results are shown in Table 4 (some results are taken directly from [36]).…”
Section: Results
confidence: 99%
“…However, to achieve this, the newly derived objective function needs to be combined with the classical one measuring its fitness (i.e., how well the function estimates the sample vectors). Classical solutions would be to use the sum or the product of the two objective functions [47]. However, we have shown that these solutions do not generally yield desirable results for kernel methods in regression.…”
Section: Discussion
confidence: 99%
“…The most pressing question for us is to show that this derived solution yields lower prediction errors than simpler, more straightforward approaches. Two such criteria are the sum and product of the two terms to be minimized [47], given by $Q_{\mathrm{sum}}(\boldsymbol{\theta}) = E_f(\boldsymbol{\theta}) + \nu\, E_c(\boldsymbol{\theta})$ and $Q_{\mathrm{pro}}(\boldsymbol{\theta}) = E_f(\boldsymbol{\theta})\, E_c(\boldsymbol{\theta})^{\gamma}$, where $\nu$ and $\gamma$ are regularization parameters that need to be selected. Note that minimizing (30) is equivalent to minimizing $\lg Q_{\mathrm{pro}}(\boldsymbol{\theta}) = \lg E_f(\boldsymbol{\theta}) + \gamma\, \lg E_c(\boldsymbol{\theta})$, which is the logarithm of (30).…”
Section: Multiobjective Optimization
confidence: 99%
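To make the quoted criteria concrete, here is a minimal numerical sketch. The objectives E_f and E_c, and the values of ν and γ, are placeholder assumptions, not quantities from the cited paper; the grid search at the end also illustrates that minimizing Q_pro and minimizing its logarithm select the same θ.

```python
import numpy as np

# Hypothetical stand-ins for the two objectives to be combined; both are
# assumptions made for the sake of the example, not the cited paper's terms.
def E_f(theta):
    return (theta - 2.0) ** 2 + 1.0   # fit term, minimized at theta = 2

def E_c(theta):
    return (theta + 1.0) ** 2 + 1.0   # second criterion, minimized at theta = -1

nu, gamma = 0.5, 0.5                  # regularization parameters to be selected

thetas = np.linspace(-5.0, 5.0, 10001)

Q_sum = E_f(thetas) + nu * E_c(thetas)       # sum criterion Q_sum(theta)
Q_pro = E_f(thetas) * E_c(thetas) ** gamma   # product criterion Q_pro(theta)

# Minimizing Q_pro is equivalent to minimizing its logarithm,
# lg Q_pro = lg E_f + gamma * lg E_c (both factors are positive here).
Q_log = np.log(E_f(thetas)) + gamma * np.log(E_c(thetas))

print("argmin Q_sum:   ", thetas[np.argmin(Q_sum)])
print("argmin Q_pro:   ", thetas[np.argmin(Q_pro)])
print("argmin lg Q_pro:", thetas[np.argmin(Q_log)])  # matches argmin Q_pro
```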