2012
DOI: 10.1016/j.ins.2012.02.042
Feature selection using structural similarity

Cited by 31 publications (11 citation statements)
References 23 publications
“…Sanguinetti [39] presented a latent variable model to perform dimensionality reduction on a dataset that contained clusters; specifically, a variable was considered salient when it preserved cluster information in the mapping from the original representation space to a latent space. Mitra [32] proposed using structural similarity between clusters for feature selection, where topological neighborhood information about pairs of instances was used to assess the similarity. Furthermore, a wrapper-based method for semi-supervised feature selection, which uses both labeled and unlabeled examples, has also been described [50,53].…”
Section: Related Work
confidence: 99%
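The quoted passage describes scoring features by how well they preserve topological neighborhood structure. As a rough illustration only (not Mitra's actual algorithm), a feature can be ranked by how much its removal disturbs each point's k-nearest-neighbour set; the function names below are hypothetical:

```python
import math

def knn(points, k):
    """k-nearest-neighbour index sets under Euclidean distance."""
    out = []
    for i, p in enumerate(points):
        d = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        out.append({j for _, j in d[:k]})
    return out

def neighborhood_overlap(a, b, k):
    """Mean fraction of shared neighbours between two neighbourhood lists."""
    return sum(len(x & y) for x, y in zip(a, b)) / (k * len(a))

def rank_features(points, k=2):
    """Rank features by how much dropping each one disturbs the k-NN structure."""
    full = knn(points, k)
    scores = []
    for f in range(len(points[0])):
        reduced = [[v for i, v in enumerate(p) if i != f] for p in points]
        # low overlap after dropping feature f => f is structurally important
        scores.append((1.0 - neighborhood_overlap(full, knn(reduced, k), k), f))
    return [f for _, f in sorted(scores, reverse=True)]
```

On a toy dataset where only the first coordinate separates the clusters, the sketch ranks that feature first, matching the intuition that it carries the neighborhood structure.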
“…An approach based on cooperative game theory evaluates the power of each feature both individually and within groups [32]. Another method maintains the structural similarity between the data before and after feature selection, using topological neighborhood information to compute that similarity [33]. An unsupervised feature ranking algorithm discovers bi-clusters, which are used to evaluate feature inter-dependencies and the separability of instances for feature ranking [23].…”
Section: Feature Selection
confidence: 99%
“…Since the space of candidate feature subsets grows exponentially with the number of features, the feature selection problem is intractable (NP-hard) [2]. To overcome this intractability, good search algorithms are required.…”
Section: Introduction
confidence: 99%
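The quote notes that exhaustive subset search is exponential, so heuristic search is used instead. A common baseline is greedy forward selection, sketched below under the assumption of an arbitrary subset-scoring function (e.g. cross-validated accuracy); the names are illustrative:

```python
def greedy_forward_select(features, score, budget):
    """Greedy forward search: repeatedly add the feature that most improves `score`.

    Evaluates O(n * budget) subsets instead of all 2**n; returns a
    locally optimal subset, with no global guarantee.
    """
    selected = frozenset()
    while len(selected) < budget:
        candidates = [selected | {f} for f in features if f not in selected]
        best = max(candidates, key=score)
        if score(best) <= score(selected):
            break  # no remaining feature improves the current subset
        selected = best
    return selected
```

With a score that rewards features 0 and 2 and slightly penalizes the rest, the search adds exactly those two and stops once no candidate improves the subset.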