2020
DOI: 10.1007/s13042-020-01089-4
Knowledge granularity based incremental attribute reduction for incomplete decision systems

Cited by 30 publications (3 citation statements)
References 58 publications
“…The authors also introduced incremental mechanisms to define knowledge granularity. Based on these mechanisms, the authors developed two incremental algorithms, KGIRA-M and KGIRD-M, to update the reduct when adding and deleting multiple objects, respectively. Considering execution time and classification accuracy, the experimental results in (Zhang, Dai & Chen, 2020) showed that the multiple-object algorithms KGIRA-M and KGIRD-M are more efficient than the single-object algorithms KGIRA and KGIRD (Zhang & Dai, 2019), respectively. Experimental results show that the execution time of incremental algorithms is much smaller than that of non-incremental algorithms. However, they are all filter algorithms. As shown above, the reduct produced by filter algorithms is not optimal in reduct cardinality or classification accuracy.…”
Section: Literature Review (mentioning)
confidence: 99%
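The knowledge-granularity measure that the KGIRA/KGIRD family maintains incrementally can be illustrated with a minimal sketch. It assumes the standard definition GK(A) = Σᵢ |Xᵢ|² / |U|², where U/IND(A) = {X₁, …} is the partition induced by attribute set A; the function name and data layout below are illustrative, not the paper's implementation.

```python
from collections import defaultdict

def knowledge_granularity(universe, attrs, table):
    """GK(A) = sum(|X_i|^2) / |U|^2 over the partition U/IND(A).

    Smaller values mean a finer partition (more discriminating attributes).
    `table` maps object -> {attribute: value}; all names are illustrative.
    """
    classes = defaultdict(int)
    for obj in universe:
        classes[tuple(table[obj][a] for a in attrs)] += 1
    n = len(universe)
    return sum(c * c for c in classes.values()) / (n * n)

# Tiny example: 4 objects; attribute 'a' splits U into two pairs,
# while {'a', 'b'} separates every object.
table = {
    0: {'a': 1, 'b': 1},
    1: {'a': 1, 'b': 2},
    2: {'a': 2, 'b': 1},
    3: {'a': 2, 'b': 2},
}
U = [0, 1, 2, 3]
print(knowledge_granularity(U, ['a'], table))       # → 0.5  (two classes of size 2)
print(knowledge_granularity(U, ['a', 'b'], table))  # → 0.25 (four singleton classes)
```

An incremental algorithm in this spirit avoids recomputing the measure from scratch by updating the per-class counts as objects are added or deleted, which is why the incremental variants above run much faster than their non-incremental counterparts.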
“…However, this information may contain a great deal of redundancy, noise, or even missing feature values [6][7][8]. How to handle missing values, reduce redundant features, and simplify the classification model, so as to improve its generalization ability, is a major current challenge [9][10][11][12][13][14][15]. As an important step of data preprocessing, feature selection based on granular computing has been widely used in knowledge discovery, data mining, machine learning, and other fields [16][17][18][19][20][21][22][23].…”
Section: Introduction (mentioning)
confidence: 99%
“…Dai et al [14] introduced a new form of conditional entropy to measure the importance of attributes in incomplete decision systems. Zhang et al [15] proposed incremental attribute reduction approaches for incomplete decision systems based on knowledge granularity. It is worth noting that, in the above studies, the missing values in the incomplete information systems occur only in the conditional attributes.…”
Section: Introduction (mentioning)
confidence: 99%
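For contrast with the granularity measure, the attribute-importance idea behind entropy-based reduction can be sketched with a generic Shannon conditional entropy H(D|B) over a partition. Dai et al.'s measure for incomplete systems is defined over tolerance classes rather than equivalence classes, so this is only the complete-data analogue, with illustrative names.

```python
import math
from collections import defaultdict

def conditional_entropy(universe, attrs, decision, table):
    """H(D | B) = -sum_i p(X_i) * sum_j p(Y_j | X_i) * log2 p(Y_j | X_i),
    computed over the equivalence classes X_i induced by attribute set B.

    An attribute set that fully determines the decision gives H = 0;
    larger values indicate less discriminating attributes.
    """
    classes = defaultdict(list)
    for obj in universe:
        classes[tuple(table[obj][a] for a in attrs)].append(obj)
    n = len(universe)
    h = 0.0
    for members in classes.values():
        counts = defaultdict(int)
        for obj in members:
            counts[table[obj][decision]] += 1
        for c in counts.values():
            p = c / len(members)
            h -= (len(members) / n) * p * math.log2(p)
    return h

# Tiny example: the class with a = 1 is pure, the class with a = 2 is mixed.
table = {
    0: {'a': 1, 'd': 'yes'},
    1: {'a': 1, 'd': 'yes'},
    2: {'a': 2, 'd': 'no'},
    3: {'a': 2, 'd': 'yes'},
}
U = [0, 1, 2, 3]
print(conditional_entropy(U, ['a'], 'd', table))  # → 0.5
```

In an entropy-driven reduction, attributes are ranked by how much adding them to the candidate reduct lowers H(D|B); the tolerance-class variant in [14] follows the same scheme but replaces equivalence classes with tolerance classes so that objects with missing values can still be grouped.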