2019 · DOI: 10.1007/s11042-019-08159-y

Robust inner product regularized unsupervised feature selection

Cited by 17 publications (26 citation statements) · References 41 publications

“…Many unsupervised feature selection methods, both similarity-preserving (filter) [9,19] and embedded [6,8,10,14] methods, are based on the input data alone and rarely take advantage of external sources of knowledge about feature group structures. The feature groups used by some feature selection methods are also formed from the input data [15,18].…”
Section: Related Work (mentioning; confidence: 99%)

“…The number of groups is set to 0.04 of the original feature set size, based on previous findings for the MT dataset [3]. Baselines: we use the Laplacian Score (LS) algorithm and Spectral Feature Selection (SPEC) [19] as similarity-preserving methods, and Multi-Cluster Feature Selection (MCFS) [6], Robust Unsupervised Feature Selection (RUFS) [14], and HUFS as embedded methods. RUFS has demonstrated high performance compared to many existing embedded methods, and HUFS uses feature group information in a manner similar to our method.…”
Section: Feature Grouping (mentioning; confidence: 99%)

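As a concrete illustration of the similarity-preserving (filter) baselines named in this excerpt, the following is a minimal sketch of the Laplacian Score (LS), assuming a kNN graph with heat-kernel weights; the function name and parameters are illustrative, not taken from the cited papers.

```python
import numpy as np

def laplacian_score(X, n_neighbors=5, t=1.0):
    """Laplacian Score for unsupervised feature selection.

    X: (n_samples, n_features) data matrix.
    Returns one score per feature; smaller scores indicate features
    that better preserve the local manifold structure of the data.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    # kNN graph with heat-kernel weights (column 0 of argsort is the point itself).
    S = np.zeros((n, n))
    nn = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        S[i, nn[i]] = np.exp(-d2[i, nn[i]] / t)
    S = np.maximum(S, S.T)            # symmetrize the graph
    d = S.sum(axis=1)                 # node degrees (diagonal of D)
    L = np.diag(d) - S                # graph Laplacian
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r] - (X[:, r] @ d) / d.sum()   # remove the D-weighted mean
        denom = f @ (d * f)
        scores[r] = (f @ L @ f) / denom if denom > 1e-12 else np.inf
    return scores
```

Features would then be ranked by ascending score, with the lowest-scoring features retained.
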
“…The use of the ℓ2,1-norm penalty on a coefficient matrix is convenient for feature selection, as it promotes row sparsity, and can be found in several other works [13], [14], [15], [16], [17], [18]. Some recent works propose using a regularized coefficient matrix to choose features that can reconstruct the entire data set well, in a manner similar to the problem we tackle in this paper [18], [19].…”
Section: Locality and Cluster Structure Preservation (mentioning; confidence: 99%)

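For reference, the ℓ2,1 norm this excerpt refers to is the sum of the Euclidean norms of the rows of the coefficient matrix W; penalizing it therefore drives whole rows, and hence whole features, to zero. The reconstruction objective on the right is a generic form of the self-representation problem the excerpt describes, not the exact model of any one cited work:

```latex
\|W\|_{2,1} = \sum_{i=1}^{d} \|w^{i}\|_{2}
            = \sum_{i=1}^{d} \sqrt{\textstyle\sum_{j=1}^{k} W_{ij}^{2}},
\qquad
\min_{W}\; \|X - XW\|_{F}^{2} + \lambda \|W\|_{2,1}.
```
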
“…However, such heuristic-based methods usually ignore the correlation among features, so redundancy may exist in the selected features. In recent years, different methods [26] [12] [24] [23] have been proposed to evaluate feature quality jointly. Linear projection based methods [26] [8] [4] [19] with the sparsity-inducing ℓ2,1 norm have become prevalent among others.…”
Section: Related Work (mentioning; confidence: 99%)

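A minimal sketch of how such a sparsity-inducing ℓ2,1 objective is typically optimized and then used for selection, assuming the generic self-representation form min_W ||X − XW||_F² + λ||W||_{2,1} solved by iteratively reweighted least squares; the function name, solver choice, and defaults are illustrative rather than drawn from any of the cited methods.

```python
import numpy as np

def l21_feature_scores(X, lam=1.0, n_iter=50, eps=1e-8):
    """Row-sparse self-representation: min_W ||X - X W||_F^2 + lam * ||W||_{2,1}.

    Solved by iteratively reweighted least squares (IRLS). Features are
    scored by the Euclidean norm of the corresponding row of W; rows
    driven toward zero correspond to features that can be discarded.
    """
    d = X.shape[1]
    XtX = X.T @ X
    # Ridge-regularized warm start for W.
    W = np.linalg.solve(XtX + lam * np.eye(d), XtX)
    for _ in range(n_iter):
        # Reweighting matrix diag(1 / (2 ||w_i||_2)) from the current rows of W.
        row_norms = np.sqrt(np.sum(W ** 2, axis=1)) + eps
        G = np.diag(0.5 / row_norms)
        # Closed-form update: (X^T X + lam * G) W = X^T X.
        W = np.linalg.solve(XtX + lam * G, XtX)
    return np.sqrt(np.sum(W ** 2, axis=1))

# Usage: keep the k features with the largest row norms.
# scores = l21_feature_scores(X, lam=10.0)
# selected = np.argsort(-scores)[:k]
```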