Published: 2011
DOI: 10.1007/s10462-011-9205-2

Feature selection for support vector machines with RBF kernel

Abstract: Linear-kernel Support Vector Machine Recursive Feature Elimination (SVM-RFE) is known as an excellent feature selection algorithm. A nonlinear SVM, however, is a black-box classifier whose mapping function is not known explicitly, so the weight vector w cannot be computed directly. In this paper, we propose a feature selection algorithm for Support Vector Machines with an RBF kernel based on Recursive Feature Elimination (SVM-RBF-RFE), which expands the nonlinear RBF kernel into its Maclaurin series, and the…
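
The expansion the abstract refers to rests on a standard identity (a sketch only; the paper's truncation order and the resulting ranking criterion are not shown in this excerpt):

```latex
% Maclaurin expansion of the RBF kernel: split off the norm factors,
% then expand the remaining exponential as a power series.
\[
K(\mathbf{x},\mathbf{z})
  = e^{-\gamma\|\mathbf{x}-\mathbf{z}\|^{2}}
  = e^{-\gamma\|\mathbf{x}\|^{2}}\, e^{-\gamma\|\mathbf{z}\|^{2}}
    \sum_{k=0}^{\infty} \frac{\bigl(2\gamma\,\mathbf{x}^{\top}\mathbf{z}\bigr)^{k}}{k!}
\]
```

Truncating the series yields an explicit polynomial feature map, which is what makes an RFE-style, weight-based feature ranking tractable for the otherwise implicit RBF map.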

Cited by 80 publications (45 citation statements, citing works published 2012–2024). References 37 publications.

Citation statements, ordered by relevance:
“…where we look at new examples and assign them to classes based on the features we have learned about during training [12]. The task is to introduce a hypothesis (classifier) that accurately predicts the labels of novel instances.…”
Section: Classification (With Assessment)
Citation type: mentioning; confidence: 99%
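
As a concrete illustration of the train-then-predict workflow that quote describes (the toy data and classifier choice here are ours, not from [12]):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data standing in for features "learned about during training".
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf")          # the hypothesis (classifier)
clf.fit(X_train, y_train)        # learn from labeled training examples
y_pred = clf.predict(X_test)     # assign labels to novel instances
```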
“…For two features that are highly dependent on each other, removing either one will not bring about much change in the class-discriminative power of the feature set. In the study of [23], the dependency between a feature variable x_i and the class label y was maximized using Equation (26), while Equation (27) was used to minimize the dependency between feature pairs x_i and x_j. This constraint is used to retain only mutually exclusive features.…”
Section: Feature Dimensionality Reduction
Citation type: mentioning; confidence: 99%
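
Equations (26) and (27) are not reproduced in this excerpt; the sketch below is a hypothetical mutual-information stand-in for the max-relevance/min-redundancy criterion they describe (the function name mrmr_select and all parameters are illustrative):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    """Greedily pick k features maximizing I(x_i; y) minus mean I(x_i; x_j)."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y)      # dependency with label y, Eq. (26) analogue
    selected = [int(np.argmax(relevance))]     # start from the most relevant feature
    while len(selected) < k:
        best_score, best_j = -np.inf, None
        for j in range(n_features):
            if j in selected:
                continue
            # Redundancy: mean dependency with already-selected features,
            # the quantity being minimized, Eq. (27) analogue.
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s])[0] for s in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
    return selected
```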
“…To produce the classification results, the test feature vectors were tested against all the trained SVM models. A majority-voting scheme is then employed to predict the class from the ensemble outputs, using a winner-takes-all strategy [25,26]. In each round of the experiment, 21 unique SVM classifiers are trained to classify the seven basic expressions.…”
Section: Support Vector Machine
Citation type: mentioning; confidence: 99%
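
A minimal sketch of the one-vs-one voting scheme described above: with 7 classes there are C(7,2) = 21 pairwise classifiers, matching the 21 SVMs in the quoted experiment. Labels are assumed to be integers 0–6; the kernel choice and all names are illustrative, not from the cited paper.

```python
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

def train_ovo(X, y):
    """Train one binary SVM per unordered pair of classes."""
    models = {}
    for a, b in combinations(np.unique(y), 2):  # C(7,2) = 21 pairs for 7 classes
        mask = (y == a) | (y == b)
        models[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])
    return models

def predict_ovo(models, X, n_classes=7):
    """Winner-takes-all: each pairwise model votes, the most-voted class wins."""
    votes = np.zeros((X.shape[0], n_classes), dtype=int)
    for clf in models.values():
        pred = clf.predict(X)
        for cls in np.unique(pred):             # tally this model's votes
            votes[pred == cls, int(cls)] += 1
    return votes.argmax(axis=1)
```

In practice, scikit-learn's SVC already applies this one-vs-one decomposition internally for multiclass problems; the explicit version above just makes the voting visible.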
“…A common drawback of these techniques is that they carry a higher risk of overfitting than filter methods and are computationally intensive, especially if building the classifier has a high computational cost [9]. Additional work has been done to assess variable importance in nonlinear-kernel SVMs by modifying SVM-RFE [3, 10, 11].…”
Section: Introduction
Citation type: mentioning; confidence: 99%
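
For reference, the linear SVM-RFE baseline that these works [3, 10, 11] modify can be sketched with scikit-learn (the dataset and parameters here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Toy data standing in for a real dataset.
X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# RFE ranks features by the linear SVM's weight magnitudes |w_i| and
# recursively eliminates the lowest-ranked ones; a nonlinear kernel has no
# explicit w, which is the gap the modified variants address.
selector = RFE(estimator=SVC(kernel="linear", C=1.0),
               n_features_to_select=10, step=1)
selector.fit(X, y)
print(selector.support_)   # boolean mask of the 10 retained features
```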