2022
DOI: 10.32604/csse.2022.023249
Binary Representation of Polar Bear Algorithm for Feature Selection

Abstract: In most scientific research, feature selection is a challenge for researchers. Selecting all available features is not an option, as it usually complicates the research and leads to a performance drop when dealing with large datasets. On the other hand, ignoring some features can compromise data accuracy. Here, rough set theory offers a good technique for identifying redundant features that can be dismissed without losing any valuable information; however, exploring all possible combinations of feat…
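The abstract describes scoring feature subsets with rough set theory while avoiding an exhaustive search over all feature combinations. As a hedged illustration only (not the authors' implementation), the sketch below computes a rough-set dependency degree for a binary feature mask, the kind of fitness a binary population-based optimizer such as a binary Polar Bear Algorithm could maximize; the dataset, function names, and masks are made-up assumptions.

# Illustrative sketch (assumed, not from the paper): rough-set dependency
# degree as a fitness for a 0/1 feature-subset mask.
from itertools import groupby

def partition(rows, cols):
    """Group row indices into equivalence classes induced by the chosen columns."""
    key = lambda i: tuple(rows[i][c] for c in cols)
    idx = sorted(range(len(rows)), key=key)
    return [set(g) for _, g in groupby(idx, key=key)]

def dependency(rows, labels, mask):
    """Fraction of rows whose equivalence class (under the selected features)
    carries a single decision label, i.e. lies in a lower approximation."""
    cols = [c for c, keep in enumerate(mask) if keep]
    if not cols:
        return 0.0
    positive = 0
    for block in partition(rows, cols):
        if len({labels[i] for i in block}) == 1:
            positive += len(block)
    return positive / len(rows)

# Toy dataset: 4 discrete features, binary decision label (illustrative only).
rows = [(1, 0, 2, 1), (1, 0, 2, 0), (0, 1, 1, 1), (0, 1, 0, 1), (1, 1, 2, 0)]
labels = [0, 0, 1, 1, 0]

# A candidate solution is simply a 0/1 mask over the features.
print(dependency(rows, labels, [1, 0, 1, 0]))  # 1.0: subset preserves consistency
print(dependency(rows, labels, [0, 1, 0, 0]))  # 0.4: subset loses information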

Cited by 2 publications (1 citation statement) · References 22 publications
“…Some researchers have extended Pawlak's idea by incorporating fuzzy equivalence relations, neighborhood relations and dominance relations into Pawlak rough sets to form neighborhood rough sets [3,4], fuzzy rough sets [5][6][7][8][9], and dominance-based rough sets [10][11][12]. The generalized models of rough set are commonly applied in the reduction of attributes [13][14][15], feature selection [16][17][18][19], extraction of rules [20][21][22][23], theory of decisions [24][25][26], incremental learning [27][28][29],…”
Section: Introduction · Citation type: mentioning · Confidence: 99%