2020
DOI: 10.1109/access.2020.2997013
Incremental Attribute Reduction Method Based on Chi-Square Statistics and Information Entropy

Abstract: Attributes in datasets are usually not equally significant; some are unnecessary or redundant. Attribute reduction, an important research issue in rough set theory, finds a minimal subset of attributes with the same classification ability as the whole attribute set by removing the unnecessary or redundant ones. We use Chi-square statistics to evaluate the significance of condition attributes, which reduces the search space of attribute reduction and improves its speed. Con…
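The abstract's core idea, scoring each condition attribute by its Chi-square association with the decision attribute, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name `chi_square_score` and the toy data are assumptions, and the score is the standard Chi-square statistic computed from the attribute/label contingency table (a higher score means the attribute is more informative about the class).

```python
from collections import Counter

def chi_square_score(attribute_values, labels):
    """Chi-square statistic between one condition attribute and the
    decision attribute, computed from their contingency table."""
    n = len(labels)
    joint = Counter(zip(attribute_values, labels))   # observed cell counts
    attr_counts = Counter(attribute_values)          # row marginals
    label_counts = Counter(labels)                   # column marginals
    score = 0.0
    for a, na in attr_counts.items():
        for c, nc in label_counts.items():
            expected = na * nc / n                   # count under independence
            observed = joint.get((a, c), 0)
            score += (observed - expected) ** 2 / expected
    return score

# A perfectly class-predictive attribute attains the maximum score n:
values = ["x", "x", "y", "y", "y", "y"]
labels = [0, 0, 1, 1, 1, 1]
print(chi_square_score(values, labels))  # 6.0 here, since n = 6
```

Ranking attributes by such a score before the entropy-based search is one way the paper's combination of Chi-square statistics and information entropy could prune the search space.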

Cited by 5 publications (1 citation statement)
References 25 publications
“…Figure 9 illustrates the estimation results of KDE. To examine the goodness of fit, the chi-square test is performed with the significance level set at 0.05 and the degrees of freedom at 49 (Su et al, 2020). The test results for the EV and PV are 23.53 and 42.40 respectively, which are both below the critical value 66.34 and validate the accuracy of KDE.…”
Section: Kernel Density Estimation Results
confidence: 99%