A hybrid discretization method for naïve Bayesian classifiers (2012)
DOI: 10.1016/j.patcog.2011.12.014

Cited by 40 publications (27 citation statements)
References 11 publications
“…The simplicity of naïve Bayes classifiers also ensures computational efficiency (Almeida, Almeida, & Yamakami, 2011). Although the assumption of independence among features is more often than not violated in practical datasets, naïve Bayes generally gives performance comparable to that of much more sophisticated classifiers (Jin, Lu, & Ling, 2003; Rish, 2001; Wong, 2012).…”
Section: Naïve Bayes
confidence: 99%
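The excerpt above summarizes why naïve Bayes is cheap: each feature is modeled independently given the class, so parameters are estimated per feature rather than jointly. A minimal Gaussian naïve Bayes sketch of that factorization (illustrative only; the class and variable names are ours, and this is not the paper's hybrid method):

```python
# Minimal Gaussian naive Bayes: per-class, per-feature means and
# variances, combined by summing per-feature log-likelihoods --
# exactly the independence assumption the excerpt refers to.
import numpy as np

class GaussianNaiveBayes:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        # Independence lets each feature be estimated on its own.
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict(self, X):
        # log P(c) + sum_j log N(x_j | mu_cj, var_cj)
        log_prior = np.log(self.priors_)                              # (C,)
        log_lik = -0.5 * (np.log(2 * np.pi * self.var_)[None, :, :]
                          + (X[:, None, :] - self.mu_[None, :, :]) ** 2
                          / self.var_[None, :, :]).sum(axis=2)        # (N, C)
        return self.classes_[np.argmax(log_prior + log_lik, axis=1)]

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4)); y = (X[:, 0] > 0).astype(int)
print(GaussianNaiveBayes().fit(X, y).predict(X[:5]))
```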
“…They used the propositionalized dataset and PAT-Table generated by the PAT-learner to build naïve Bayes classifiers. [16] focused on the discretization of attributes to improve naïve Bayes classification. Wong proposed a hybrid method for continuous attributes, and mentioned that the discretization of continuous attributes in a dataset using different methods can improve the performance of naïve Bayes learning.…”
Section: Related Work
confidence: 99%
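Wong's hybrid method itself is not reproduced in the excerpt; as a rough sketch of the pipeline it describes (discretize continuous attributes, then learn naïve Bayes on the resulting bins), here is equal-frequency binning in NumPy. The function names and the equal-frequency cut choice are our assumptions, not the paper's scheme:

```python
# Rough discretize-then-learn sketch (not Wong's hybrid method):
# equal-frequency binning of each continuous attribute.
import numpy as np

def equal_frequency_bins(x, n_bins=5):
    """Cut points at empirical quantiles so each bin holds roughly equal mass."""
    qs = np.linspace(0, 1, n_bins + 1)[1:-1]
    return np.quantile(x, qs)

def discretize(X, n_bins=5):
    """Map each continuous column to bin indices 0..n_bins-1."""
    cuts = [equal_frequency_bins(X[:, j], n_bins) for j in range(X.shape[1])]
    Xd = np.column_stack([np.searchsorted(c, X[:, j]) for j, c in enumerate(cuts)])
    return Xd, cuts

# Usage: fit cut points on training data, reuse them at test time.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
Xd_train, cuts = discretize(X_train)
X_test = rng.normal(size=(10, 3))
Xd_test = np.column_stack([np.searchsorted(c, X_test[:, j]) for j, c in enumerate(cuts)])
```

A hybrid scheme in the excerpt's sense would pick a different discretization method per attribute; this sketch applies one method uniformly for brevity.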
“…Wong proposed a hybrid method for continuous attributes, and mentioned that the discretization of continuous attributes in a dataset using different methods can improve the performance of naïve Bayes learning. Additionally, [16] provided a nonparametric measure to evaluate the level of dependence between a continuous attribute and the class.…”
Section: Related Work
confidence: 99%
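The excerpt names a nonparametric dependence measure but does not define it. One standard stand-in (not necessarily the measure from [16]) is the Kruskal-Wallis H test applied to the attribute's values grouped by class; the helper below is hypothetical:

```python
# Stand-in nonparametric dependence check between a continuous
# attribute and a discrete class: Kruskal-Wallis H test over the
# per-class value groups. Not the specific measure from [16].
import numpy as np
from scipy.stats import kruskal

def class_dependence(x, y):
    """H statistic and p-value for attribute x against class labels y."""
    groups = [x[y == c] for c in np.unique(y)]
    return kruskal(*groups)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=300)
x_dep = rng.normal(loc=y, scale=1.0)   # distribution shifts with the class
x_indep = rng.normal(size=300)         # ignores the class
print(class_dependence(x_dep, y))      # small p-value expected
print(class_dependence(x_indep, y))    # large p-value expected
```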
“…In [9], [10], SNB (selective naïve Bayes) is used for filtering and ranking the attributes of a medical dataset, which gives better performance. The contribution of a single attribute is measured by adding the new attribute to the existing set and comparing the accuracy of the current and previous sets.…”
Section: Existing Work 7.1 Feature Selection Algorithms on Medical Data
confidence: 99%
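The procedure this excerpt describes is a greedy wrapper: try adding each attribute and keep it only if accuracy improves. A sketch under that reading, using scikit-learn's GaussianNB with cross-validation (the function name and CV setup are our assumptions, not the exact SNB algorithm of [9], [10]):

```python
# Greedy wrapper selection as the excerpt describes: add an attribute
# only if cross-validated naive Bayes accuracy improves.
# Illustrative; not the exact SNB algorithm of [9], [10].
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def selective_naive_bayes(X, y, cv=5):
    remaining = list(range(X.shape[1]))
    selected, best_acc = [], 0.0
    improved = True
    while improved and remaining:
        improved = False
        for j in list(remaining):
            trial = selected + [j]
            acc = cross_val_score(GaussianNB(), X[:, trial], y, cv=cv).mean()
            if acc > best_acc:        # keep the candidate only if it helps
                best_acc, best_j, improved = acc, j, True
        if improved:
            selected.append(best_j)   # commit the best attribute this round
            remaining.remove(best_j)
    return selected, best_acc
```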