2021
DOI: 10.1007/978-3-030-92632-8_15

The Improvement of Attribute Reduction Algorithm Based on Information Gain Ratio in Rough Set Theory

Abstract: Because data sets are numerous and their data types are cumbersome and diverse, they often contain many redundant attributes, which greatly increases classification time in the setting of rough set theory. In this paper, we improve the attribute reduction algorithm by using the information gain ratio. The data sets obtained after attribute reduction with this method are used for classification, and the original data sets are also classified directly for comparison against other common classification methods. Experimenta…
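The gain-ratio criterion behind this kind of attribute reduction can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy decision table, the attribute names `a1`/`a2`, and the selection threshold are all hypothetical assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(attr_values, labels):
    """Information gain ratio of one condition attribute w.r.t. the decision labels."""
    n = len(labels)
    parts = {}
    for v, y in zip(attr_values, labels):
        parts.setdefault(v, []).append(y)
    # Conditional entropy of the decision given the attribute's partition.
    cond = sum(len(p) / n * entropy(p) for p in parts.values())
    gain = entropy(labels) - cond
    # Split information penalizes attributes with many distinct values.
    split = -sum(len(p) / n * math.log2(len(p) / n) for p in parts.values())
    return gain / split if split > 0 else 0.0

# Hypothetical toy decision table: two condition attributes, one decision.
a1 = [0, 0, 1, 1]               # perfectly separates the two classes
a2 = [0, 1, 0, 1]               # carries no class information
decision = ['y', 'y', 'n', 'n']

ranked = sorted([('a1', gain_ratio(a1, decision)),
                 ('a2', gain_ratio(a2, decision))],
                key=lambda kv: -kv[1])
# A reduct candidate keeps attributes whose gain ratio exceeds a chosen
# threshold (the 0.1 cutoff is an assumption, not taken from the paper).
reduct = [name for name, gr in ranked if gr > 0.1]
```

Here `a1` attains the maximal gain ratio of 1.0 while `a2` scores 0.0, so only `a1` survives the reduction — the redundant attribute is dropped before classification, which is the time saving the abstract describes.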

Cited by 1 publication (1 citation statement) · References 10 publications
“…There are two general feature selection strategies: wrappers [36] and filters [35]. While the wrapper strategy employs learning algorithms to evaluate selected attribute subsets, the filter strategy selects attributes based on measures such as information gain [24,26,27,28,29,33,34,39,40], consistency [1,23,25,30,47], distance [8,9,35,42,43], and dependency [27,41,38,46]. These measures can be classified into distance and positive regions [30].…”
Section: Introduction
Confidence: 99%