2015
DOI: 10.1016/j.knosys.2015.06.008

Selecting feature subset with sparsity and low redundancy for unsupervised learning

Citing publications: 2016–2024

Cited by 49 publications (16 citation statements: 0 supporting, 16 mentioning, 0 contrasting). References 22 publications.

“…For example, Laplacian Score (LS) [11], Spectral Feature Selection (SPEC) [12], and Multi-Cluster Feature Selection (MCFS) [8] are three well-known algorithms. Meanwhile, some researchers have shown that the manifold information of the data is distributed not only in the data space but also in the feature space [37–40]. Therefore, the feature manifold also contains the underlying geometric structure information, which is beneficial for feature selection.…”
Section: A Ji (mentioning)
Confidence: 99%
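The Laplacian Score named in this excerpt ranks features by how well they preserve the neighbourhood structure of a graph built over the samples. The following is a minimal sketch, assuming a heat-kernel kNN affinity; the parameters k and t and the helper name laplacian_score are illustrative, not taken from the cited paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_score(X, k=5, t=1.0):
    """Sketch of the Laplacian Score: X is (n_samples, n_features);
    smaller scores indicate features that better preserve local
    manifold structure. k and t are assumed hyperparameters."""
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")             # pairwise squared distances
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]    # k nearest neighbours, skipping self
    rows = np.repeat(np.arange(n), k)
    cols = idx.ravel()
    W = np.zeros((n, n))
    W[rows, cols] = np.exp(-D2[rows, cols] / t) # heat-kernel edge weights
    W = np.maximum(W, W.T)                      # symmetrise the kNN graph
    d = W.sum(axis=1)                           # vertex degrees
    L = np.diag(d) - W                          # unnormalised graph Laplacian
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        f = X[:, j]
        f_t = f - (f @ d) / d.sum()             # remove the degree-weighted mean
        den = f_t @ (d * f_t)                   # f~^T D f~, a variance term
        scores[j] = (f_t @ L @ f_t) / den if den > 1e-12 else np.inf
    return scores
```

Selection then keeps the features with the smallest scores, e.g. np.argsort(laplacian_score(X))[:m] for some budget m.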
“…Therefore, the feature manifold also contains the underlying geometric structure information, which is beneficial for feature selection. Inspired by [37–40], we incorporate the local structure information of the feature space of the data into our algorithm to address the first shortcoming of Eq. (1).…”
Section: A Ji (mentioning)
Confidence: 99%
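The feature-manifold idea in this excerpt can be sketched by building a graph over the columns of the data matrix rather than its rows. The snippet below is one plausible construction, assuming cosine similarity between features and kNN sparsification; the function name feature_laplacian and these choices are assumptions, not the cited papers' exact formulation.

```python
import numpy as np

def feature_laplacian(X, k=5):
    """Sketch: build a kNN similarity graph over FEATURES (columns of X)
    and return its Laplacian, which encodes the local geometry of the
    feature manifold. k is an assumed hyperparameter."""
    F = X.T                                      # (n_features, n_samples)
    Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    S = Fn @ Fn.T                                # cosine similarity between features
    np.fill_diagonal(S, 0.0)                     # no self-edges
    idx = np.argsort(-S, axis=1)[:, :k]          # k most similar features
    rows = np.repeat(np.arange(S.shape[0]), k)
    A = np.zeros_like(S)
    A[rows, idx.ravel()] = S[rows, idx.ravel()]  # keep only kNN edges
    A = np.maximum(A, A.T)                       # symmetrise
    return np.diag(A.sum(axis=1)) - A            # feature-graph Laplacian L_f
```

A regularizer such as trace(Wᵀ L_f W) on a feature-weight matrix W would then push strongly similar features toward similar weights, which is the sense in which feature-space geometry is injected into the selection objective.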
“…By choosing a small subset of the most informative features that is ideally necessary and sufficient to describe the target concept [10], feature selection is capable of solving data mining and pattern recognition problems on data sets involving a large number of features. Some have attempted to explore the advantages of feature selection for credit risk evaluation in P2P lending.…”
Section: Introduction (mentioning)
Confidence: 99%