2018
DOI: 10.1016/j.patcog.2018.05.012

A Kernel Partial Least Square based feature selection method

Abstract: Highlights
• The paper proposes a Kernel Partial Least Square (KPLS) based feature selection method aimed at easy computation and improved classification accuracy for high-dimensional data.
• The proposed method uses KPLS regression coefficients to identify an optimal set of features, thus avoiding non-linear optimization.
• Experiments were carried out on seven real-life datasets with four different classifiers: SVM, LDA, Random Forest, and Naïve Bayes.
• Experimental results highlight the advantage of …
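The highlights describe ranking features by the magnitude of KPLS regression coefficients. A minimal sketch of the idea in its linear form, using scikit-learn's PLSRegression (the dataset and top-k cutoff are illustrative assumptions, and the linear model stands in for the paper's kernel variant):

```python
# Sketch of coefficient-based feature ranking with linear PLS; the paper's
# method applies the same idea to Kernel PLS regression coefficients.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import load_breast_cancer  # illustrative dataset

X, y = load_breast_cancer(return_X_y=True)

pls = PLSRegression(n_components=5)
pls.fit(X, y)

# Rank features by absolute regression coefficient; larger magnitude
# indicates a stronger contribution to predicting the response.
scores = np.abs(pls.coef_).ravel()
top_k = 10  # illustrative cutoff
selected = np.argsort(scores)[::-1][:top_k]
print("Top features by |PLS coefficient|:", selected)
```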

Cited by 37 publications (10 citation statements) · References 43 publications
“…The minimum redundancy maximum relevance (mRMR) is a variable selection algorithm that tends to select variables having a high correlation with the response variable (relevance) and the least correlation among the selected variables (redundancy). In PLS, the variables with minimum redundancy and maximum relevance can be found by starting with the variable having maximal mutual information with the response $y$, then greedily adding variables from the PLS loading weights with a maximal value of $J(W) = I(W, y) - \frac{1}{|S|} \sum_{j \in S} I(W, j)$, where $S$ is the set of already selected variables and $j$ denotes a variable in $S$. Variables are ranked based on mRMR, and the top $m$ variables are marked as influential, where $m$ can be determined through validation.…”
Section: Variable Selection Methods in PLS (mentioning)
confidence: 99%
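The greedy criterion quoted above translates directly into code. A minimal sketch (an assumption for illustration, not the cited paper's implementation) using scikit-learn's mutual information estimator:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(X, y, m):
    """Greedily pick m features maximizing J(W) = I(W, y) - mean_{j in S} I(W, j)."""
    n_features = X.shape[1]
    relevance = mutual_info_regression(X, y)   # I(W, y) for every candidate W
    selected = [int(np.argmax(relevance))]     # seed: max mutual information with y
    while len(selected) < m:
        best_w, best_score = None, -np.inf
        for w in range(n_features):
            if w in selected:
                continue
            # mean mutual information with the already selected set S (redundancy)
            redundancy = np.mean(
                [mutual_info_regression(X[:, [j]], X[:, w])[0] for j in selected]
            )
            score = relevance[w] - redundancy  # J(W)
            if score > best_score:
                best_w, best_score = w, score
        selected.append(best_w)
    return selected
```

As the snippet notes, the number of retained variables m is best chosen through validation.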
“…The novel fast searching algorithm, which operates on sorted data sets, is developed based on the concept of straight-line fitting (using the least squares method) [7,8] and linear regression [9]. The linear trend in the data is exploited, as the data set is already sorted.…”
Section: Basic Idea (mentioning)
confidence: 99%
“…where $i$ is the index of the $i^{th}$ element of the data set, $y_i$ is the $i^{th}$ data value, $m$ and $c$ are the coefficients that satisfy all data elements, and $n$ is the size of the data set. The coefficients $m$ and $c$ are computed based on the curve-fitting method [7,8]. The backbone of our proposed searching algorithms is the above obtained equation.…”
Section: Basic Idea (mentioning)
confidence: 99%
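A minimal sketch of the idea described in the two snippets above (an interpretation, not code from [7-9]): fit a least-squares line of value versus index to the sorted data, predict the key's position from the fitted coefficients, and verify with a local binary search:

```python
import numpy as np

def line_fit_search(data, key):
    """Search a sorted sequence via a least-squares line fit of value vs. index."""
    n = len(data)  # assumes at least two elements for a meaningful fit
    slope, intercept = np.polyfit(np.arange(n), data, deg=1)  # least-squares fit
    if abs(slope) < 1e-12:                      # (near-)constant data: direct check
        return 0 if data[0] == key else -1
    guess = int(round((key - intercept) / slope))
    guess = min(max(guess, 0), n - 1)           # clamp prediction into bounds
    # Grow a window around the prediction until it must contain the key,
    # then finish with a binary search inside the window.
    step = 1
    while True:
        lo, hi = max(guess - step, 0), min(guess + step, n - 1)
        if (data[lo] <= key <= data[hi]) or (lo == 0 and hi == n - 1):
            break
        step *= 2
    pos = lo + int(np.searchsorted(data[lo:hi + 1], key))
    return pos if pos < n and data[pos] == key else -1
```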
“…Discriminant analysis algorithms have been used for dimensionality reduction and feature extraction in many applications of computer vision [52,53]. In most discriminant analysis algorithms, the transformation matrix is found by maximizing the Fisher-Rao criterion [54].…”
Section: Feature Selection Based on LDA (mentioning)
confidence: 99%
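A minimal sketch of maximizing the Fisher-Rao criterion J(w) = (wᵀ S_b w)/(wᵀ S_w w) (an illustration under standard LDA assumptions, not the specific method of [52-54]): the transformation matrix is formed from the top generalized eigenvectors of the between-class and within-class scatter matrices:

```python
import numpy as np
from scipy.linalg import eigh

def fisher_lda(X, y, n_components):
    """Transformation matrix maximizing (w' S_b w)/(w' S_w w), one column per direction."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    S_w = np.zeros((d, d))                      # within-class scatter
    S_b = np.zeros((d, d))                      # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        S_w += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all).reshape(-1, 1)
        S_b += Xc.shape[0] * (diff @ diff.T)
    # Generalized eigenproblem S_b w = lambda * S_w w; a small ridge keeps S_w
    # invertible when features outnumber samples.
    eigvals, eigvecs = eigh(S_b, S_w + 1e-6 * np.eye(d))
    return eigvecs[:, ::-1][:, :n_components]   # eigh sorts ascending; take the top
```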