2022
DOI: 10.1016/j.ins.2021.11.034
Granular cabin: An efficient solution to neighborhood learning in big data

Cited by 37 publications (5 citation statements)
References 44 publications
“…Four algorithms based on the Rough Set theory and its variants and extensions were tested. The analysis and comparison of results allowed us to understand the superiority of the approaches based on Fuzzy Rough Sets for the interpretation of the phenomenon even if further experiments are needed, including, for example, those on big data which may require more efficient approaches such as the one reported in [ 28 ].…”
Section: Discussion (mentioning)
confidence: 99%
“…So, we can see that SNFJE or SNFDI always achieves the best classification performance. Specifically, in Figure 2, for CART classification accuracies, SNFJE and SNFDI attain the maximal accuracies in 8/12 data sets (i.e., Data sets 1, 2, 4, 5, 6, 8, 9, 12); for KNN classification accuracies, SNFJE and SNFDI attain the maximal accuracies in 8/12 data sets (i.e., Data sets 1, 2, 4, 6, 7, 8, 9, 12); for SVM classification accuracies, SNFJE and SNFDI attain the maximal accuracies in 8/12 data sets (i.e., Data sets 2, 3, 4, 7, 8, 9, 12). It should be noticed that, although our proposed methods are sometimes defeated by ALL (i.e., all the original features), they are still evidently superior to the other attribute reduction algorithms.…”
Section: B. Comparisons on Classification (mentioning)
confidence: 99%
“…In 1982, Professor Z. Pawlak coined rough set theory, which is generally acknowledged as an efficient and relatively new mathematical tool for processing incomplete, inaccurate, and indefinable data [1]-[3]. In rough set theory, no extra information is needed; it has therefore gained huge attention from a considerable number of researchers across many fields, to name a few, artificial intelligence, decision making, machine learning, granular computing, and data mining [4]-[6].…”
(mentioning)
confidence: 99%
“…The core of such a phase is to discriminate those features with inferior quality and remove them from the raw features. The specific process of one popular backward search, named backward greedy searching (BGS) [35,36], is as follows: (1) given a predefined constraint, each feature in the raw feature set is evaluated by a measure, and the unqualified features are selected; (2) the selected features are removed from the raw feature set; (3) if the constraint is satisfied by the remaining features, the search process is terminated. • Random searching.…”
(mentioning)
confidence: 99%
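The backward greedy searching (BGS) procedure quoted in the excerpt above can be sketched roughly as follows. This is only an illustrative one-feature-at-a-time greedy variant under stated assumptions: the `quality` measure and `threshold` constraint are hypothetical placeholders, not names or definitions from the cited papers.

```python
# Hedged sketch of backward greedy searching (BGS) for attribute reduction.
# Assumptions (not from the source): `quality` maps a feature subset to a
# score, and the predefined constraint is "quality(subset) >= threshold".
def backward_greedy_search(features, quality, threshold):
    """Iteratively drop the feature whose removal hurts quality least,
    stopping once any further removal would violate the constraint."""
    remaining = list(features)
    while len(remaining) > 1:
        # Evaluate each candidate subset with one feature removed.
        scores = {f: quality([g for g in remaining if g != f])
                  for f in remaining}
        worst = max(scores, key=scores.get)  # least useful feature
        if scores[worst] >= threshold:
            remaining.remove(worst)          # constraint still met: drop it
        else:
            break                            # any removal violates constraint
    return remaining
```

For example, with a toy additive quality measure over feature weights {'a': 0.5, 'b': 0.3, 'c': 0.1} and threshold 0.7, the search drops 'c' (quality stays at 0.8) but keeps 'a' and 'b', since removing either would fall below the constraint.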