2014
DOI: 10.1016/j.knosys.2013.10.016

Speeding up incremental wrapper feature subset selection with Naive Bayes classifier

Cited by 148 publications (56 citation statements)
References 15 publications
“…Figure 8b shows that the NB classifier selected features that perform more consistently throughout all positions. Because NB classifiers are known to be very sensitive to the presence of redundant and/or irrelevant attributes [5], they typically select more consistent features that perform well on average over all positions. For both classifiers, the sensors C and F are the best overall performing sensors.…”
Section: Evaluation of Individual Orientations and Positions
confidence: 99%
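The sensitivity of NB to redundant and irrelevant attributes noted in this statement can be illustrated with a small experiment. The following is a minimal sketch, assuming scikit-learn's GaussianNB and a synthetic dataset; it is not code or data from the cited study.

```python
# Illustrative only: compare Naive Bayes accuracy on an informative feature
# set versus the same set padded with redundant and irrelevant columns.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# 10 informative features only.
X_inf, y = make_classification(n_samples=500, n_features=10, n_informative=10,
                               n_redundant=0, random_state=0)

# Pad with 10 redundant (noisy copies) and 20 irrelevant (pure noise) columns.
X_pad = np.hstack([X_inf,
                   X_inf + rng.normal(scale=0.1, size=X_inf.shape),
                   rng.normal(size=(X_inf.shape[0], 20))])

for name, X in [("informative only", X_inf), ("padded", X_pad)]:
    acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

Typically the padded variant scores lower, which is consistent with the sensitivity the quoted statement describes.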
“…Many feature selection methods have been proposed, and they are usually classified into three classes: "filter" methods, "wrapper" methods and "embedded" methods [2,7,8,14]. Wu [19] proposed using the fuzzy and grey Delphi methods to identify a set of reliable attributes and, based on these attributes, transforming big data to a manageable scale to consider their impacts.…”
Section: Literature Review
confidence: 99%
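The indexed paper concerns incremental wrapper feature subset selection with NB as the wrapper classifier. As a rough illustration of the "wrapper" category mentioned in the statement above, the sketch below ranks features with a filter score (mutual information) and then accepts each one only if it improves cross-validated NB accuracy. Function and variable names are illustrative and not taken from the paper.

```python
# Simplified sketch of an incremental wrapper selection loop with a
# Naive Bayes wrapper; this is illustrative, not the paper's algorithm.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB


def incremental_wrapper_nb(X, y, cv=5):
    """Rank features by a filter score, then add each one in turn
    only if it improves the wrapper (cross-validated NB) accuracy."""
    ranking = np.argsort(mutual_info_classif(X, y))[::-1]  # best-ranked first
    selected, best_acc = [], 0.0
    for f in ranking:
        candidate = selected + [int(f)]
        acc = cross_val_score(GaussianNB(), X[:, candidate], y, cv=cv).mean()
        if acc > best_acc:               # keep the feature only if it helps
            selected, best_acc = candidate, acc
    return selected, best_acc
```

Note that this sketch retrains NB from scratch at every step and therefore does not include the speed-up techniques the indexed paper proposes.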
“…The concept of "optimal" can be illustrated by two aspects: (1) the elements of the set are highly relevant to the MCDM problem for decision purposes; (2) the set of attributes is parsimonious, and the selected alternative will be suboptimal if one of these attributes is omitted. The rationale of attribute selection is similar to that of feature selection in the data mining field.…”
Section: Introduction
confidence: 99%
“…Similar to BN, NB embeds the concept of independence within the Bayesian theorem. It employs conditional probability and has been successfully applied to feature subset selection (Bermejo et al. 2014). It has also been combined with decision trees for multi-class classification tasks (Farid et al. 2014).…”
Section: Related Work
confidence: 99%
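Since the statement above highlights NB's use of conditional probability under an independence assumption, a minimal worked sketch of the posterior computation for discrete features may help; the helper names and the smoothing floor are assumptions for illustration, not from the paper.

```python
# Minimal Naive Bayes for discrete features: the independence assumption lets
# the class-conditional likelihood factor into a product over attributes.
from collections import Counter, defaultdict


def fit_nb(rows, labels):
    """Estimate P(c) and P(x_i = v | c) from a small discrete dataset."""
    prior = Counter(labels)
    counts = defaultdict(Counter)          # (feature index, class) -> value counts
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, c)][v] += 1
    n = len(labels)
    priors = {c: k / n for c, k in prior.items()}
    cond = {key: {v: k / sum(cnt.values()) for v, k in cnt.items()}
            for key, cnt in counts.items()}
    return priors, cond


def predict_nb(priors, cond, row):
    """Return argmax_c P(c) * prod_i P(x_i | c), i.e. the MAP class."""
    scores = {}
    for c, p in priors.items():
        for i, v in enumerate(row):
            p *= cond.get((i, c), {}).get(v, 1e-9)   # tiny floor for unseen values
        scores[c] = p
    return max(scores, key=scores.get)
```

For example, predict_nb(*fit_nb([('sunny', 'hot'), ('rain', 'cool'), ('rain', 'mild')], ['no', 'yes', 'yes']), ('rain', 'hot')) classifies a new row from the learned priors and per-attribute conditionals.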