2009
DOI: 10.1016/j.patcog.2009.01.025
Sparse multinomial kernel discriminant analysis (sMKDA)

Abstract: Dimensionality reduction via canonical variate analysis (CVA) is important for pattern recognition and has been extended variously to permit more flexibility, e.g. by "kernelizing" the formulation. This can lead to over-fitting, usually ameliorated by regularization. Here, a method for sparse, multinomial kernel discriminant analysis (sMKDA) is proposed, using a sparse basis to control complexity. It is based on the connection between CVA and least-squares, and uses forward selection via orthogonal least-squares to approximate a basis, generalizing a similar approach for binomial problems.

Cited by 9 publications (5 citation statements); references 36 publications.
“…Methods in the nature-inspired group include binary particle swarm optimization, GA [8,9], binary flower pollination [10], and binary cuckoo search [11]. The wrapper method of our interest was GA. Lastly, there has been extensive research on embedded methods such as sparsity control using the l q -norm [12], Jeffreys' hyperprior [13], and canonical variate analysis [14]. Moreover, common embedded methods for bioinformatic tasks include random forests, the weight vector of an SVM, and decision trees [4,7], but they were not used in our work because filter and wrapper methods have been reported to be more stable for feature selection tasks [7].…”
Section: Introduction
confidence: 99%
“…where M_n = P_n^T P_n + Λ_n. The computation involved in (22) is unacceptably large; to further simplify the LOO error calculation, a residual matrix is introduced as:…”
Section: B. Locally Regularized Forward Selection
confidence: 99%
“…and h_i is the i-th diagonal element of the inverse recursive matrix M^{-1} in (22). Suppose h_k = diag(M_k^{-1}) is defined; then this vector can be updated recursively as follows (see Appendix for more details):…”
Section: Updating Regularisation Parameters
confidence: 99%
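The citation above concerns cheap leave-one-out (LOO) error evaluation for regularized least-squares models. A minimal sketch of the underlying identity, under the assumption of a fixed ridge-style penalty: the LOO residual of sample i equals the ordinary residual divided by (1 - h_i), where h_i is the i-th diagonal of the "hat" matrix P (P^T P + Λ)^{-1} P^T. All names here are illustrative, not taken from the cited paper.

```python
import numpy as np

def loo_errors(P, y, lam=1e-3):
    """LOO residuals for ridge-regularized least squares, without n refits.

    Uses the exact identity e_loo_i = e_i / (1 - h_i), where h_i is the
    i-th diagonal of the hat matrix P (P^T P + lam*I)^{-1} P^T.
    """
    n, m = P.shape
    Minv = np.linalg.inv(P.T @ P + lam * np.eye(m))   # (P^T P + Lambda)^{-1}
    w = Minv @ P.T @ y                                # regularized LS weights
    resid = y - P @ w                                 # ordinary residuals
    h = np.einsum('ij,jk,ik->i', P, Minv, P)          # diag of hat matrix
    return resid / (1.0 - h)

# Brute-force check: refit with each sample actually held out.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
y = rng.normal(size=20)
lam = 1e-3
fast = loo_errors(P, y, lam)
slow = np.empty(20)
for i in range(20):
    mask = np.arange(20) != i
    wi = np.linalg.solve(P[mask].T @ P[mask] + lam * np.eye(3),
                         P[mask].T @ y[mask])
    slow[i] = y[i] - P[i] @ wi
print(np.allclose(fast, slow))
```

This identity is what makes LOO-based model selection affordable inside a forward-selection loop: one fit plus a diagonal extraction replaces n separate refits.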
“…All such relative comparisons derived in each neighborhood on the manifold are enumerated and maintained in the low-dimensional manifold to be learned. Sparse Multinomial Kernel Discriminant Analysis (sMKDA) [13] is a method for sparse, multinomial kernel discriminant analysis. It is based on the connection between Canonical Variate Analysis (CVA) [14] and least-squares, and uses forward selection via orthogonal least-squares to approximate a basis, generalizing a similar approach for binomial problems.…”
Section: Introduction
confidence: 99%
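The CVA/least-squares connection mentioned above can be illustrated with a small sketch: regressing a kernel matrix onto one-hot class indicators with a ridge penalty yields discriminant scores in the kernel feature space. This is not the authors' sMKDA algorithm (which additionally selects a sparse basis by orthogonal least-squares); the Gaussian kernel, penalty value, and function names are assumptions for illustration.

```python
import numpy as np

def kernel_ls_discriminant(X, labels, gamma=1.0, lam=1e-2):
    """Kernel least-squares discriminant scores (illustrative sketch).

    Regresses a one-hot class indicator matrix Y onto a Gaussian kernel
    matrix K with a ridge penalty; rows of the result are per-class scores.
    """
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                       # Gaussian kernel matrix
    Y = np.eye(labels.max() + 1)[labels]          # one-hot class indicators
    A = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ Y)  # dual weights
    return K @ A                                  # discriminant scores

# Two well-separated Gaussian blobs in 2-D.
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.3, (10, 2)),
               np.random.default_rng(2).normal(2.0, 0.3, (10, 2))])
labels = np.array([0] * 10 + [1] * 10)
scores = kernel_ls_discriminant(X, labels)
pred = scores.argmax(axis=1)
print((pred == labels).mean())
```

sMKDA's contribution, per the abstract, is to replace the full n-column kernel basis here with a sparse subset chosen by forward selection, controlling model complexity.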