2004
DOI: 10.1007/978-3-540-24844-6_81
Selection of the Linearly Separable Feature Subsets

Cited by 10 publications (3 citation statements). References 5 publications.
“…It is true that numerous data mining methods suffer from the so-called curse of dimensionality. Neural networks have also not been adapted for high-dimensional data; alternative approaches can instead be recommended for high-dimensional supervised classification (e.g. Bobrowski & Łukaszuk, 2011).…”
Section: Results
Confidence: 99%
“…scheme. We obtained them on each record of the dataset using an algorithm from the mentioned family of random subsampling, our SVM ensemble, and Genet [75], an algorithm that adapts the training of a single SVM in a semi-automatic mode. The picture shows that there are records for which the error rate is high for every method we consider.…”
Section: Numerical Results
Confidence: 99%
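The random-subsampling ensemble mentioned in the statement above can be illustrated with a minimal sketch: each base learner is trained on a random subsample of the records, and predictions are combined by majority vote. The perceptron base learner, the `frac` subsample size, and all function names below are illustrative assumptions, not the cited authors' implementation (which uses SVMs).

```python
import random

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Plain perceptron on labels in {-1, +1}; a stand-in for the SVM base learner."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wk * xk for wk, xk in zip(w, xi)) + b
            if yi * s <= 0:  # misclassified (or on the boundary): update
                w = [wk + lr * yi * xk for wk, xk in zip(w, xi)]
                b += lr * yi
    return w, b

def subsample_ensemble(X, y, n_models=11, frac=0.7, seed=0):
    """Train n_models base learners, each on a random subsample of the data."""
    rng = random.Random(seed)
    n = len(X)
    m = max(1, int(frac * n))
    models = []
    for _ in range(n_models):
        idx = rng.sample(range(n), m)  # random subsample without replacement
        models.append(train_perceptron([X[i] for i in idx], [y[i] for i in idx]))
    return models

def vote(models, x):
    """Majority vote of the ensemble on a single record x."""
    s = 0
    for w, b in models:
        s += 1 if sum(wk * xk for wk, xk in zip(w, x)) + b > 0 else -1
    return 1 if s > 0 else -1
```

On clearly separable data the majority vote recovers the correct sign even when individual subsamples are unlucky; an odd `n_models` avoids ties.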
“…The feature subset with minimal cross-validation error (CVE) was selected as optimal and applied to the data of all patients to determine the receiver operating characteristic (ROC) curve and check classification accuracy. The details of the RLS feature selection method were presented elsewhere.…”
Section: Methods
Confidence: 99%
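Selecting the feature subset with minimal cross-validation error, as described in the statement above, can be sketched as an exhaustive search over subsets, each scored with leave-one-out CV. The nearest-centroid classifier and the helper names below are assumptions for illustration only; the cited relaxed-linear-separability (RLS) method works differently.

```python
from itertools import combinations

def loo_cv_error(X, y, features):
    """Leave-one-out CV error of a nearest-centroid rule restricted to `features`.

    y must contain binary labels 0/1."""
    n = len(X)
    errors = 0
    for i in range(n):
        # Class centroids computed with sample i held out
        sums = {0: [0.0] * len(features), 1: [0.0] * len(features)}
        counts = {0: 0, 1: 0}
        for j in range(n):
            if j == i:
                continue
            c = y[j]
            counts[c] += 1
            for k, f in enumerate(features):
                sums[c][k] += X[j][f]
        cents = {c: [s / counts[c] for s in sums[c]] for c in (0, 1)}

        def sqdist(c):
            return sum((X[i][f] - cents[c][k]) ** 2 for k, f in enumerate(features))

        pred = 0 if sqdist(0) <= sqdist(1) else 1
        errors += pred != y[i]
    return errors / n

def best_subset(X, y, n_features):
    """Return the feature subset with minimal LOO CV error (exhaustive search)."""
    best, best_err = None, float("inf")
    for r in range(1, n_features + 1):
        for feats in combinations(range(n_features), r):
            err = loo_cv_error(X, y, feats)
            if err < best_err:  # strict: prefer the smaller subset found first
                best, best_err = feats, err
    return best, best_err
```

Exhaustive search is only feasible for a handful of features; for high-dimensional data a greedy or margin-based search (as in the RLS approach) replaces the enumeration.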