2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
DOI: 10.1109/smc.2018.00237
Unsupervised Feature Selection through Fitness Proportionate Sharing Clustering

Cited by 5 publications (5 citation statements)
References 19 publications
“…The experiment involves a comparative analysis of the LTV feature selection with several prominent feature selection methods to assess its ability to appropriately rank the Iris dataset's features. For this purpose, Relief-F [26], unsupervised feature saliency (UFSA) [27], Laplacian Score [28], unsupervised feature selection through fitness proportionate sharing clustering (UFSFPS) [29], locally linear embedding (LLE) [30] and radial basis function neural networks (RBFNN) [31] were used. Table 2 shows how the proposed and other methods ranked the features in the Iris dataset.…”
Section: Experimental Results Using the Iris Dataset
confidence: 99%
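Of the baselines named in this excerpt, the Laplacian Score lends itself to a compact illustration. The sketch below ranks the four Iris features with a from-scratch Laplacian Score; the neighborhood size `k=5` and heat-kernel width `t=1.0` are illustrative assumptions, not values taken from any of the cited papers.

```python
# Hedged sketch: Laplacian Score feature ranking on the Iris dataset.
# Hyperparameters (k, t) are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import kneighbors_graph

def laplacian_score(X, k=5, t=1.0):
    # Symmetrized kNN graph with heat-kernel edge weights
    A = kneighbors_graph(X, k, mode="distance").toarray()
    A = np.maximum(A, A.T)
    W = np.where(A > 0, np.exp(-(A ** 2) / t), 0.0)
    d = W.sum(axis=1)          # node degrees
    L = np.diag(d) - W         # unnormalized graph Laplacian
    scores = []
    for f in X.T:
        # Remove the degree-weighted mean of the feature
        f_tilde = f - (f @ d) / d.sum()
        num = f_tilde @ L @ f_tilde
        den = (d * f_tilde ** 2).sum()
        scores.append(num / den)
    return np.array(scores)    # lower score = more important feature

X = load_iris().data
scores = laplacian_score(X)
ranking = np.argsort(scores)   # feature indices, best first
```

Lower scores indicate features that better preserve the local manifold structure of the data, so `np.argsort(scores)` orders the Iris features from most to least important.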
“…The classification accuracy (CA) on the considered datasets is computed using Decision Tree, KNN, and MLP, and the results are tabulated in TABLE 3, TABLE 4, and TABLE 5, respectively. For comparison, eight state-of-the-art feature selection methods have been considered, whose results are taken from [20][21][22]. Each table has 12 columns: the first column gives the dataset name, columns 2 to 10 (M1 to M9) present the classification accuracies attained by the considered methods, column 11 indicates the minimum percentage of features selected by any of the compared methods (M1 to M8), and column 12 contains the percentage of features selected by the proposed method.…”
Section: Dataset
confidence: 99%
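The evaluation protocol this excerpt describes, measuring the classification accuracy of a selected feature subset with Decision Tree, KNN, and MLP, can be sketched as follows. The dataset, the selected subset, and the cross-validation setup are illustrative assumptions, not the cited paper's exact configuration.

```python
# Hedged sketch of the CA evaluation protocol: score one feature subset
# with the three classifiers named in the excerpt. The subset [2, 3]
# (Iris petal length/width) and 5-fold CV are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
selected = [2, 3]  # assumed feature subset for illustration

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}

accs = {}
for name, clf in classifiers.items():
    # Mean 5-fold cross-validated accuracy on the selected columns only
    accs[name] = cross_val_score(clf, X[:, selected], y, cv=5).mean()
    print(f"{name}: {accs[name]:.3f}")
```

Reporting the mean accuracy per classifier alongside the percentage of features retained mirrors the 12-column table layout the excerpt describes.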
“…In the literature, a substantial amount of work on unsupervised feature selection methods can be found [20][21][22]. In [23], a method is proposed that partitions the considered feature set into distinct clusters such that features within a cluster are highly similar, while features in different clusters are quite dissimilar.…”
Section: Introduction
confidence: 99%
“…Most of these studies in the literature focused on rigorous data analysis involving feature selection [10], [11] and statistical feature-formation techniques. These processes are time-consuming, and some features may exist only in some vehicles, which limits the practicability of the work.…”
Section: Related Work
confidence: 99%