2018
DOI: 10.1007/978-3-319-98566-4_10

Filter-Based Feature Selection Methods Using Hill Climbing Approach


Cited by 14 publications (7 citation statements)
References 23 publications
“…At any position in state space, the search only continues in the direction that optimizes the cost function, in the hope of eventually discovering the best answer [89]. The study utilized the random hill climbing algorithm successfully [90], [91], [92], [93].…”
Section: Random Hill Climbing
confidence: 99%
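The excerpt above describes stochastic (random) hill climbing: accept only moves that improve the objective. Below is a minimal Python sketch of that idea, assuming a generic cost function to be minimized over candidate states; the function names and the one-bit-flip neighborhood are illustrative assumptions, not taken from the cited paper.

import random

def random_hill_climbing(initial_state, cost, neighbors, max_iters=1000):
    """Stochastic hill climbing: try a randomly chosen neighbor of the
    current state and move there only if it improves the cost."""
    current = initial_state
    current_cost = cost(current)
    for _ in range(max_iters):
        candidate = random.choice(neighbors(current))
        candidate_cost = cost(candidate)
        if candidate_cost < current_cost:  # accept improving moves only
            current, current_cost = candidate, candidate_cost
    return current, current_cost

# Illustrative (hypothetical) objective: minimize the number of 1-bits in a
# binary state; neighbors differ from the state by exactly one flipped bit.
def bit_flip_neighbors(state):
    return [state[:i] + (1 - state[i],) + state[i + 1:] for i in range(len(state))]

start = tuple(random.randint(0, 1) for _ in range(8))
best_state, best_cost = random_hill_climbing(start, cost=sum, neighbors=bit_flip_neighbors)

Because only improving moves are accepted, this search can stall in a local optimum, which is the weakness the next excerpt contrasts against strategies such as simulated annealing, GA, and Tabu search.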
“…A randomized search strategy starts by randomly selecting features and then proceeds with one of two kinds of search. The first uses classical sequential or bidirectional search, e.g., simulated annealing (Ezugwu et al 2017) and random hill-climbing (Goswami et al 2019). The second uses strategies that have no regular movement pattern, e.g., the genetic algorithm (GA) (Babatunde et al 2014) and Tabu search (Zhang & Sun, 2002). The second kind can escape local optima in the search space, but it has a greater chance of producing incorrect results because it lacks a mechanism to capture relationships between features.…”
Section: Global Optimal Feature Selection
confidence: 99%
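As a sketch of one strategy the excerpt names that can escape local optima, here is simulated annealing applied to feature selection over a binary mask: a worse candidate is still accepted with probability exp(delta / T). The evaluation function, cooling schedule, and toy objective below are assumptions for illustration, not drawn from the cited works.

import math
import random

def simulated_annealing_fs(n_features, evaluate, t0=1.0, cooling=0.95, steps=500):
    """Simulated annealing over binary feature masks (1 = feature kept).
    Unlike hill climbing, a worse candidate is still accepted with
    probability exp(delta / T), so the search can leave local optima."""
    mask = [random.randint(0, 1) for _ in range(n_features)]
    score = evaluate(mask)
    best_mask, best_score = mask[:], score
    t = t0
    for _ in range(steps):
        candidate = mask[:]
        i = random.randrange(n_features)
        candidate[i] = 1 - candidate[i]            # flip one feature in/out
        cand_score = evaluate(candidate)
        delta = cand_score - score
        if delta > 0 or random.random() < math.exp(delta / t):
            mask, score = candidate, cand_score    # accept, possibly downhill
            if score > best_score:
                best_mask, best_score = mask[:], score
        t *= cooling                               # geometric cooling (assumed)
    return best_mask, best_score

# Toy objective (hypothetical): reward even-indexed features, penalize size.
# In practice this would be, e.g., cross-validated accuracy of a classifier
# trained on the selected feature subset.
toy_eval = lambda m: sum(1 for i, b in enumerate(m) if b and i % 2 == 0) - 0.1 * sum(m)
selected, score = simulated_annealing_fs(10, toy_eval)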
“…Findings in the literature reveal that FS provides several benefits in modeling and analysis. With fewer features than the original configuration, models are much easier to interpret [24]. Similarly, with fewer features, models become less expensive to train because of lower memory and computation costs [25].…”
Section: Related Work
confidence: 99%