2011 IEEE 11th International Conference on Data Mining
DOI: 10.1109/icdm.2011.22
An Efficient Greedy Method for Unsupervised Feature Selection

Cited by 76 publications (48 citation statements); references 8 publications.
“…We included scanner strength, voxel size, patient age and gender as covariates in the analysis. The SVM-RFE method allows one to minimize redundant and extraneous features that could potentially degrade classifier performance (Farahat et al., 2011). SVM-RFE works backwards from the initial set of features and eliminates the least "useful" feature on each recursive pass.…”
Section: Machine Learning Methods and Analysis
confidence: 99%
“…In connection with the unsupervised feature selection problem, a variant of the greedy algorithm presented in this paper has previously been proposed [27,28], where it has shown superior performance to other state-of-the-art methods for unsupervised feature selection. However, the algorithm proposed by Farahat et al. [27,28] is centralized and cannot easily be extended to handle big data that are massively distributed across different machines.…”
Section: Comparison With Related Work
confidence: 98%
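The greedy method of Farahat et al. [27,28] discussed above selects features one at a time so that the chosen columns best reconstruct the full data matrix. The sketch below illustrates that greedy reconstruction-error criterion with a naive search at each step; it is an assumption-laden illustration on synthetic data, not the authors' efficient recursive formulation.

# Naive sketch of greedy, reconstruction-error-based unsupervised feature
# selection in the spirit of Farahat et al. [27,28]. This brute-force
# version recomputes a least-squares projection for every candidate; the
# published method's efficiency comes from recursive updates of this
# criterion instead.
import numpy as np

def greedy_feature_selection(X, k):
    """Greedily pick k columns of X whose span best reconstructs X."""
    n_features = X.shape[1]
    selected = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(n_features):
            if j in selected:
                continue
            S = X[:, selected + [j]]
            # Least-squares projection of X onto the candidate column subset.
            P, *_ = np.linalg.lstsq(S, X, rcond=None)
            err = np.linalg.norm(X - S @ P, "fro")
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))   # synthetic data matrix
print(greedy_feature_selection(X, 5))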
“…Besides, a number of methods that are analytically or computationally manageable in low-dimensional space may become completely intractable when the number of features reaches thousands or more [3]. Therefore, reducing the data dimension is an indispensable part of data mining and machine learning tasks [4].…”
Section: Introduction
confidence: 99%