2014
DOI: 10.1145/2576868

Discrete Bayesian Network Classifiers

Abstract: We have had to wait over 30 years since the naive Bayes model was first introduced in 1960 for the so-called Bayesian network classifiers to resurge. Based on Bayesian networks, these classifiers have many strengths, like model interpretability, accommodation to complex data and classification problem settings, existence of efficient algorithms for learning and classification tasks, and successful applicability in real-world problems. In this article, we survey the whole set of discrete Bayesian network classifiers…

Cited by 214 publications (150 citation statements, 2014–2024)
References 155 publications

Citation statements (ordered by relevance):
“…This allows us to directly model the conditional probabilities p(EC_B | X, Y) and p(EC_S | W, Z). However, this is only tractable if very few observed variables are considered; if the number of observed variables were to increase, then alternatives should be explored in order to reduce the number of parameters in the model, for instance by using BN classifiers [13].…”
mentioning
confidence: 99%
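The parameter explosion this excerpt alludes to is easy to quantify. The sketch below is a toy illustration, not code from the cited work (the function names and binary cardinalities are assumptions); it contrasts a full conditional probability table over n discrete parents with the factorized form a BN classifier such as naive Bayes uses:

```python
# Toy comparison of free-parameter counts: a full conditional table over
# n discrete parents versus the naive Bayes factorization. All names here
# are illustrative, not taken from the cited paper.

def full_cpt_params(n_features: int, cardinality: int, n_classes: int) -> int:
    # P(C | X_1, ..., X_n) needs one class distribution per joint
    # configuration of the parents: cardinality**n_features configurations.
    return (cardinality ** n_features) * (n_classes - 1)

def naive_bayes_params(n_features: int, cardinality: int, n_classes: int) -> int:
    # Class prior plus one per-class distribution for each feature:
    # growth is linear in n rather than exponential.
    return (n_classes - 1) + n_classes * n_features * (cardinality - 1)

for n in (2, 10, 20):
    print(n, full_cpt_params(n, 2, 2), naive_bayes_params(n, 2, 2))
```

With 20 binary observed variables the full table already needs over a million free parameters, while the factorized model needs 41; this is the tractability gap the excerpt points to.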
“…The datasets from the UCI Machine Learning repository [21] were discretized using the LUCS-KDDN software. Unlike algorithms such as K2 or CBL, no assumptions were made concerning the ordering of the features within the dataset. Datasets with missing values or continuous values were not considered, because we are interested in testing the Widened learning process and not the robustness of the algorithm to various data types.…”
Section: Results
mentioning
confidence: 99%
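For context, equal-width binning is one common way such discretization is done. The sketch below is a generic illustration only; it does not reproduce the LUCS-KDDN tool (whose exact scheme the excerpt does not describe), and the bin count is an assumption:

```python
# A minimal sketch of equal-width discretization of a continuous column.
# This is a generic stand-in, not the LUCS-KDDN scheme; n_bins is assumed.
import numpy as np

def discretize_equal_width(column: np.ndarray, n_bins: int = 5) -> np.ndarray:
    """Map a continuous column to integer bin labels in [0, n_bins - 1]."""
    edges = np.linspace(column.min(), column.max(), n_bins + 1)
    # Digitizing against the interior edges yields labels 0..n_bins-1.
    return np.digitize(column, edges[1:-1])

values = np.array([0.1, 0.4, 2.5, 3.3, 9.7])
print(discretize_equal_width(values, n_bins=3))  # -> [0 0 0 1 2]
```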
“…An excellent survey can be found in [4]. In [14], Friedman et al. describe the Tree Augmented Naïve Bayes Network (TAN), in which edges are added between the child nodes of a Naïve Bayes network via a greedy search using the MDL scoring function, and whose structure is limited to that of a tree.…”
Section: Related Work
mentioning
confidence: 99%
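For reference, the classic Friedman et al. construction builds the TAN tree as a maximum-weight spanning tree over the attributes, weighted by conditional mutual information. The sketch below follows that recipe rather than the greedy MDL variant the excerpt mentions; all function names are illustrative:

```python
# A hedged sketch of the Chow-Liu-style construction behind TAN: build a
# maximum-weight spanning tree over the attributes, with edges weighted by
# the empirical conditional mutual information I(X_i; X_j | C).
from collections import Counter
from itertools import combinations
from math import log

def cond_mutual_info(xi, xj, c):
    """Empirical I(X_i; X_j | C) (in nats) from parallel lists of values."""
    n = len(c)
    p_ijc = Counter(zip(xi, xj, c))
    p_ic = Counter(zip(xi, c))
    p_jc = Counter(zip(xj, c))
    p_c = Counter(c)
    total = 0.0
    for (a, b, k), cnt in p_ijc.items():
        # p(a,b,k) * log[ p(a,b,k) p(k) / (p(a,k) p(b,k)) ], counts cancel n.
        total += (cnt / n) * log(cnt * p_c[k] / (p_ic[(a, k)] * p_jc[(b, k)]))
    return total

def tan_tree(columns, labels):
    """Return spanning-tree edges (i, j) over attribute indices (Prim-style)."""
    m = len(columns)
    weights = {(i, j): cond_mutual_info(columns[i], columns[j], labels)
               for i, j in combinations(range(m), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < m:
        # Pick the heaviest edge with exactly one endpoint inside the tree.
        i, j = max(((a, b) for a, b in weights
                    if (a in in_tree) != (b in in_tree)),
                   key=lambda e: weights[e])
        edges.append((i, j))
        in_tree.update((i, j))
    return edges
```

In the full classifier, the resulting tree is rooted at an arbitrary attribute and each attribute then receives the class node plus its tree parent as parents.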
“…Generative learning [5][6][7][8] approximates the joint probability P(c, x) with different factorizations according to Bayesian network classifiers, which are powerful tools for knowledge representation and inference under conditions of uncertainty. Naive Bayes (NB) [9], the simplest kind of Bayesian network classifier, which assumes the attributes are independent given the class label, is surprisingly effective. Following NB, many state-of-the-art algorithms, for example tree-augmented naive Bayes (TAN) [10] and the k-dependence Bayesian classifier (KDB) [11], have been proposed to relax the independence assumption by allowing conditional dependence between attributes X_i and X_j, measured by the conditional mutual information I(X_i; X_j | C).…”
Section: Introduction
mentioning
confidence: 99%
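To make the independence assumption concrete, here is a minimal naive Bayes sketch implementing the factorization P(c | x) ∝ P(c) ∏_i P(x_i | c) in log space with Laplace smoothing; the variable names and the crude vocabulary estimate are assumptions for illustration:

```python
# A minimal naive Bayes sketch: score(c) = log P(c) + sum_i log P(x_i | c),
# with Laplace smoothing. Names and smoothing details are illustrative.
from collections import Counter, defaultdict
from math import log

def train_nb(rows, labels, alpha=1.0):
    class_counts = Counter(labels)
    feat_counts = defaultdict(Counter)  # (feature index, class) -> value counts
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            feat_counts[(i, c)][v] += 1
    return class_counts, feat_counts, alpha

def predict_nb(model, row):
    class_counts, feat_counts, alpha = model
    n = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for c, cc in class_counts.items():
        score = log(cc / n)  # log class prior
        for i, v in enumerate(row):
            counts = feat_counts[(i, c)]
            vocab = len(counts) + 1  # crude support estimate for smoothing
            score += log((counts[v] + alpha) / (cc + alpha * vocab))
        if score > best_score:
            best, best_score = c, score
    return best

model = train_nb([("a", "x"), ("a", "y"), ("b", "y")], ["+", "+", "-"])
print(predict_nb(model, ("a", "y")))  # -> "+"
```

TAN and KDB relax exactly the per-class independence encoded in the inner loop above, by letting each attribute additionally condition on one (TAN) or up to k (KDB) other attributes.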