2013
DOI: 10.1109/tkde.2011.222

On Similarity Preserving Feature Selection

Abstract: In the literature of feature selection, different criteria have been proposed to evaluate the goodness of features. In our investigation, we notice that a number of existing selection criteria implicitly select features that preserve sample similarity, and can be unified under a common framework. We further point out that any feature selection criteria covered by this framework cannot handle redundant features, a common drawback of these criteria. Motivated by these observations, we propose a new "Similarity Preserving Feature Selection"…
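The abstract's central idea, selecting features that preserve a predefined sample similarity, can be sketched in a few lines: score a candidate feature subset by how closely the Gram matrix built from those features alone matches a target similarity matrix K. The function name and the plain Frobenius-norm discrepancy below are illustrative assumptions, not the exact criterion proposed in the paper.

```python
import numpy as np

def similarity_preservation_score(X, feature_idx, K):
    """Score a feature subset by how well it preserves a predefined
    sample-similarity matrix K (lower is better). Illustrative only.

    X           : (n_samples, n_features) data matrix
    feature_idx : indices of the candidate feature subset
    K           : (n_samples, n_samples) target similarity matrix
    """
    Xs = X[:, feature_idx]                    # data restricted to the subset
    K_hat = Xs @ Xs.T                         # similarity induced by the subset
    return np.linalg.norm(K_hat - K, "fro")   # discrepancy to the target

# Toy usage: compare two 2-feature subsets.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
K = X[:, :2] @ X[:, :2].T                     # pretend features 0 and 1 define the "true" similarity
print(similarity_preservation_score(X, [0, 1], K))  # ~0
print(similarity_preservation_score(X, [3, 4], K))  # larger
```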

Cited by 290 publications (120 citation statements)
References 40 publications
“…The last term is the diagonal vector of the hyperedge weight W_H and enjoys the sparsity property, i.e., the weights of useless hyperedges will be set to 0. To be more specific, the first term aims to select the k (k < d) features that best preserve the sample similarity as specified by a predefined similarity matrix K. Here, K is constructed using the Fisher kernel in supervised learning [2] and a Gaussian kernel in unsupervised learning. However,…”
Section: Proposed Framework For Feature Selection
confidence: 99%
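The quoted statement says K comes from the Fisher kernel in the supervised case and a Gaussian kernel in the unsupervised case. As a rough illustration, the two constructions might look like the sketch below; the block-structured label-based similarity is a common stand-in and is not claimed to be the exact Fisher-kernel construction used in [2].

```python
import numpy as np

def supervised_similarity(y):
    """Label-based similarity: samples sharing a class are similar.
    K_ij = 1/n_c if y_i and y_j both belong to class c, else 0.
    (Illustrative stand-in for the Fisher-kernel construction.)"""
    y = np.asarray(y)
    K = np.zeros((len(y), len(y)))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        K[np.ix_(idx, idx)] = 1.0 / len(idx)
    return K

def unsupervised_similarity(X, sigma=1.0):
    """Gaussian (RBF) kernel: K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))
```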
“…• SPFS [2]: The basic idea of SPFS is to pursue a transformation matrix that maps the high-dimensional data to a low-dimensional representation while maximally preserving the global similarity structure of the original data.…”
Section: Experiments Setup
confidence: 99%
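Read literally, this describes learning a projection W so that the projected data XW reproduces the target similarity, with rows of W driven to zero marking discarded features. A minimal gradient-descent sketch under that assumption follows; the Frobenius objective, the l2,1 penalty, and the step size are illustrative choices, not the solver used in [2].

```python
import numpy as np

def spfs_like_projection(X, K, k=2, lam=0.1, lr=1e-3, n_iter=500, seed=0):
    """Learn W so that (X W)(X W)^T approximates K, with an l2,1 penalty
    encouraging whole rows of W (i.e., whole features) to vanish.
    Minimizes ||X W W^T X^T - K||_F^2 + lam * sum_i ||W_i||_2 by plain
    gradient descent; a rough sketch, step size may need tuning."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((d, k))
    for _ in range(n_iter):
        R = X @ W @ W.T @ X.T - K                    # residual to the target similarity
        grad = 4 * X.T @ R @ X @ W                   # gradient of the Frobenius term
        row_norms = np.linalg.norm(W, axis=1, keepdims=True) + 1e-8
        grad += lam * W / row_norms                  # subgradient of the l2,1 penalty
        W -= lr * grad
    return W   # features whose rows have near-zero norm can be discarded
```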
“…We also notice that a number of existing feature selection criteria implicitly select features that preserve sample relationship, which can be inferred from either a predefined distance metric or label information [25]. This indicates that it would be beneficial to incorporate sample relationship into the feature selection algorithm.…”
Section: IG(X, Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)
confidence: 99%
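The section heading of the citing paper is the standard information-gain identity, IG(X, Y) = H(X) − H(X|Y) = H(Y) − H(Y|X). A quick numeric check of this symmetry for discrete variables (a minimal sketch; function names are illustrative):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H of a discrete variable, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(x, y):
    """H(X|Y) = sum_y p(y) * H(X | Y = y)."""
    x, y = np.asarray(x), np.asarray(y)
    return sum((y == v).mean() * entropy(x[y == v]) for v in np.unique(y))

x = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y = np.array([0, 0, 0, 1, 1, 1, 1, 0])
ig_xy = entropy(x) - conditional_entropy(x, y)
ig_yx = entropy(y) - conditional_entropy(y, x)
print(round(ig_xy, 6) == round(ig_yx, 6))   # True: both equal the mutual information
```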
“…Unsupervised feature selection focuses on a target concept rather than on class labels, where observations that are close to each other in the feature space should belong to the same target concept [41,42]. Some approaches that follow this principle using spectral graph theory are SPEC [41] and the Laplacian Score [22].…”
Section: Sequential Forward Selection (SFS) and Sequential Backward Elimination
confidence: 99%
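The Laplacian Score mentioned in the quote is one concrete instance of that principle: a feature is good if samples that are close on a neighborhood graph vary little along it. Below is a minimal sketch of the Laplacian Score of He, Cai and Niyogi, assuming a simple RBF-weighted k-NN graph; the graph parameters are illustrative.

```python
import numpy as np

def laplacian_score(X, n_neighbors=5, sigma=1.0):
    """Laplacian Score per feature (lower = better).
    L_r = (f~_r^T L f~_r) / (f~_r^T D f~_r), with f~_r the feature centered
    w.r.t. the degree distribution, L = D - S the graph Laplacian."""
    n, d = X.shape
    # RBF similarities, then keep the n_neighbors strongest links per sample.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    S = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(S, 0.0)
    keep = np.argsort(-S, axis=1)[:, :n_neighbors]
    mask = np.zeros_like(S, dtype=bool)
    mask[np.repeat(np.arange(n), n_neighbors), keep.ravel()] = True
    S = np.where(mask | mask.T, S, 0.0)        # symmetrized k-NN graph
    D = np.diag(S.sum(axis=1))
    L = D - S
    ones = np.ones(n)
    scores = np.empty(d)
    for r in range(d):
        f = X[:, r]
        f_tilde = f - (f @ D @ ones) / (ones @ D @ ones) * ones
        scores[r] = (f_tilde @ L @ f_tilde) / (f_tilde @ D @ f_tilde + 1e-12)
    return scores
```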