2019
DOI: 10.3390/app9040679

Combining Fuzzy C-Means Clustering with Fuzzy Rough Feature Selection

Abstract: With the rapid development of networks, data fusion has become an important research hotspot. Large amounts of data need to be preprocessed in data fusion; in practice, the features of datasets can be filtered to reduce the amount of data. Feature selection based on fuzzy rough sets can process large numbers of continuous and discrete attributes to reduce the data dimension, making the selected feature subset highly correlated with the classification but less dependent on other features. In this paper, a new me…
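The abstract describes pairing fuzzy C-means clustering with fuzzy-rough feature selection. As a rough illustration of that general technique, and not the authors' exact method, the sketch below implements standard fuzzy C-means plus a QuickReduct-style greedy search driven by the fuzzy-rough dependency degree in Python/NumPy; the per-feature similarity relation, the fuzzifier m, and the stopping tolerance are illustrative assumptions.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)               # memberships of each sample sum to 1
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                  # u_ij proportional to d_ij^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

def fuzzy_similarity(col):
    """Per-feature fuzzy similarity relation R(x, y) = 1 - |x - y| / range."""
    span = col.max() - col.min() + 1e-12
    return 1.0 - np.abs(col[:, None] - col[None, :]) / span

def dependency(X, y, subset):
    """Fuzzy-rough dependency degree of the decision y on the given feature subset."""
    R = np.minimum.reduce([fuzzy_similarity(X[:, a]) for a in subset])
    pos = np.zeros(len(y))
    for cls in np.unique(y):
        in_cls = (y == cls).astype(float)
        # fuzzy lower approximation: min over y' of max(1 - R(x, y'), membership of y' in cls)
        lower = np.min(np.maximum(1.0 - R, in_cls[None, :]), axis=1)
        pos = np.maximum(pos, lower)
    return pos.mean()

def quickreduct(X, y):
    """Greedy forward selection driven by the fuzzy-rough dependency degree."""
    remaining, subset, best = set(range(X.shape[1])), [], 0.0
    while remaining:
        gains = {a: dependency(X, y, subset + [a]) for a in remaining}
        a_best = max(gains, key=gains.get)
        if gains[a_best] <= best + 1e-9:             # stop when no feature adds dependency
            break
        subset.append(a_best)
        remaining.remove(a_best)
        best = gains[a_best]
    return subset, best
```

For example, running quickreduct on a normalised feature matrix returns the selected feature indices and the achieved dependency degree, and fuzzy_cmeans can then cluster the samples in the reduced space.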

Cited by 14 publications (7 citation statements)
References 45 publications
“…As for low-dimensional datasets, it tends to be difficult to determine the core of the analyzed data. These results are consistent with research in the same field [21]. Data with low dimensions lower the computational load, so computation can be completed significantly faster.…”
Section: Results (supporting)
confidence: 90%
“…R. Zhao, L. Gu, and X. Zhu also did research in the same field as this study. Their work combined C-Means Clustering with a reduct functioning as rough set feature selection, which was able to improve accuracy by an average of about 1% [21]. The Core process added in this research guaranteed that the dimension-reduction result was acquired only from the core of the dataset.…”
Section: Fig. 6 Average Purity (mentioning)
confidence: 98%
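The "Core process" mentioned in the statement above can be read in the classical rough-set sense, i.e. the attributes that belong to every reduct. A minimal sketch of one common approximation (a feature is in the core if removing it from the full attribute set lowers the dependency degree) follows; it takes a dependency function such as the one in the earlier sketch as a parameter and is an illustrative assumption, not the citing authors' implementation.

```python
import numpy as np

def core_features(X, y, dependency, tol=1e-9):
    """Features whose removal from the full attribute set lowers the dependency degree."""
    all_feats = list(range(X.shape[1]))
    full = dependency(X, y, all_feats)
    core = []
    for a in all_feats:
        reduced = dependency(X, y, [f for f in all_feats if f != a])
        if reduced < full - tol:        # removing `a` loses discernibility,
            core.append(a)              # so `a` is indispensable and belongs to the core
    return core
```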
“…We also compare our model with the FRFS (Fuzzy Rough Feature Selection) [22], T-FRFS (Threshold Fuzzy Rough Feature Selection) [23], and C-FRFS (C-Means Fuzzy Rough Feature Selection) [24] algorithms on the Iris and Wine datasets from the UCI machine learning repository, in terms of accuracy and reduct size. The results show that our algorithm outperforms the others in accuracy at this reduct size.…”
Section: Experiments and Results (mentioning)
confidence: 99%
“…Our algorithm uses a genetic-algorithm technique to generate new fuzzy rules from the initial rules and then recalculates their accuracy, which will be greater than that of the old rules before the genetic algorithm was applied. The proposed model is applied to the Iris and Wine datasets and the results are compared with other models: Preselection with niches [16], NSGA-II (Non-dominated Sorting Genetic Algorithm II) [17,18], ENORA (Evolutionary Non-dominated sorting with Radial slots) [19,20], AP-NSGA-II (Average-Point Non-dominated Sorting Genetic Algorithm II) [21], FRFS (Fuzzy Rough Feature Selection) [22], T-FRFS (Threshold Fuzzy Rough Feature Selection) [23], and C-FRFS (C-Means Fuzzy Rough Feature Selection) [24], in terms of the number of fuzzy sets (L) and the classification rate evaluating the accuracy on training and test instances (CR), to show its validity.…”
Section: Introduction (mentioning)
confidence: 99%
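The statement above describes generating new fuzzy rules with a genetic-algorithm technique and keeping those whose accuracy improves. A minimal sketch of that general idea is given below; the rule encoding (per-feature triangular fuzzy-set centres plus a class label, assuming normalised features), the mutation scheme, and the acceptance test are illustrative assumptions and not the cited authors' implementation.

```python
import numpy as np

def rule_accuracy(rules, X, y, width=1.0):
    """Classify each sample by the rule with the highest product of triangular memberships."""
    preds = []
    for x in X:
        scores = [np.prod(np.maximum(0.0, 1.0 - np.abs(x - centres) / width))
                  for centres, _ in rules]
        preds.append(rules[int(np.argmax(scores))][1])
    return float(np.mean(np.array(preds) == y))

def evolve_rules(rules, X, y, generations=50, sigma=0.1, seed=0):
    """Mutate the fuzzy-set centres of one rule at a time, keeping candidates that are no worse."""
    rng = np.random.default_rng(seed)
    best, best_acc = rules, rule_accuracy(rules, X, y)
    for _ in range(generations):
        cand = [(centres.copy(), cls) for centres, cls in best]
        i = int(rng.integers(len(cand)))
        cand[i] = (cand[i][0] + rng.normal(0.0, sigma, size=cand[i][0].shape), cand[i][1])
        acc = rule_accuracy(cand, X, y)
        if acc >= best_acc:             # accept the mutated rule set only if accuracy does not drop
            best, best_acc = cand, acc
    return best, best_acc
```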
“…In addition, the resulting data sets experience bias and error from the measurement process as well as from the nature of the biological entity being studied (Kihm et al. 2018; William et al. 2019). This problem raises new opportunities for data processing using an algorithmic approach rather than the usual statistical methods (Wang 2006; Zhao et al. 2019). The algorithm must provide information on the differences, categories, and treatments associated with the measurement data sets.…”
Section: Introduction (mentioning)
confidence: 99%