Proceedings of the 23rd International Conference on Machine Learning (ICML '06), 2006
DOI: 10.1145/1143844.1143957
Full Bayesian network classifiers

Cited by 58 publications (37 citation statements); references 10 publications.
“…We now demonstrate that it is easy to incorporate this consideration into our optimization program in (23).…”

Section: E. Assigning Costs to the Selection of Edges
confidence: 93%
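The citing paper's optimization program (23) is not reproduced on this page. Purely as an illustration of the general idea of pricing edges during structure selection, the Python sketch below adds a per-edge cost to a greedy score-based search; `score_gain`, `edge_cost`, and the toy usage are hypothetical placeholders, not the cited formulation.

```python
# Illustrative sketch only: a greedy edge-selection loop in which each
# candidate edge must "pay for itself" -- its score improvement has to
# exceed an assigned cost. Acyclicity checks are omitted for brevity,
# and score_gain / edge_cost are hypothetical placeholders.

def select_edges(nodes, score_gain, edge_cost):
    """Greedily add the edge with the best cost-adjusted gain."""
    selected = set()
    candidates = {(u, v) for u in nodes for v in nodes if u != v}
    while True:
        best, best_val = None, 0.0
        for u, v in candidates - selected:
            # Net benefit: model-fit improvement minus the edge's price.
            val = score_gain(u, v, selected) - edge_cost(u, v)
            if val > best_val:
                best, best_val = (u, v), val
        if best is None:  # no remaining edge pays for itself
            break
        selected.add(best)
    return selected

# Toy usage with a made-up gain function and uniform edge costs.
nodes = ["A", "B", "C"]
gain = lambda u, v, sel: 1.0 if (u, v) == ("A", "B") else 0.1
print(select_edges(nodes, gain, edge_cost=lambda u, v: 0.5))  # {('A', 'B')}
```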
“…However, estimating the model parameters via maximum likelihood is complicated because the learned structures are loopy. Su and Zhang [23] suggested representing variable independencies by conditional probability tables (CPTs) instead of by the structure of the graphical model. Boosting has been used by Rosset and Segal [24] for density estimation and for learning Bayesian networks, but the objective there was modeling rather than classification.…”

Section: B. Related Work
confidence: 99%
“…Finally, in an FBN, all variables are dependent. In this study, we used the algorithm proposed in [14] to build FBNs. Attributes in this algorithm are assumed to depend on each other, and attribute independence is captured in CPTs (conditional probability tables) learned from decision trees.…”

Section: Methods
confidence: 99%
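As a heavily simplified illustration of a conditional probability table represented by a decision tree (the "CPTs learned from decision trees" idea quoted above), the sketch below fits one small tree for a child attribute given its candidate parents and reads the conditional probabilities off the leaves. The use of scikit-learn and the synthetic data are my own assumptions, not the algorithm of [14].

```python
# Sketch: a CPT stored as a decision tree. Leaves hold
# P(child | values tested along the path); identical leaves across
# parent configurations encode context-specific independence.
# scikit-learn and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
parents = rng.integers(0, 2, size=(500, 2))  # two binary parent attributes
# The child copies parent 0 with 10% noise, so parent 1 is irrelevant.
child = (parents[:, 0] ^ (rng.random(500) < 0.1)).astype(int)

cpt_tree = DecisionTreeClassifier(max_depth=2).fit(parents, child)

# Read the CPT back out: P(child = 1 | parent configuration).
for config in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    p1 = cpt_tree.predict_proba([config])[0][1]
    print(config, "-> P(child=1) ~", round(p1, 2))
```

Under this toy setup the (0, 0) and (0, 1) rows come out roughly equal, which is exactly the independence from parent 1 that the tree-structured table makes explicit.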
“…Thus, heuristic and approximate learning algorithms are the realistic solution. A variety of learning algorithms have been proposed [26]. Moreover, it has been observed that learning an unrestricted Bayesian network classifier does not necessarily lead to a classifier with good performance.…”

Section: Introduction
confidence: 99%
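A standard way to see why a good generative fit need not translate into good classification is to compare the joint log-likelihood that structure learning typically maximizes with the conditional log-likelihood of the class. The tiny two-class model below is my own illustration of that gap, not material from the cited paper.

```python
# Sketch: joint log-likelihood (the usual generative learning target)
# vs. conditional log-likelihood of the class (the classification-
# relevant quantity). The one-feature model is purely illustrative.
import math

prior = {0: 0.5, 1: 0.5}                # P(c)
likelihood = {0: {"a": 0.9, "b": 0.1},  # P(x | c)
              1: {"a": 0.6, "b": 0.4}}

def joint_ll(data):
    return sum(math.log(prior[c] * likelihood[c][x]) for x, c in data)

def conditional_ll(data):
    total = 0.0
    for x, c in data:
        evidence = sum(prior[k] * likelihood[k][x] for k in prior)
        total += math.log(prior[c] * likelihood[c][x] / evidence)
    return total

data = [("a", 0), ("b", 1), ("a", 0)]
print("joint LL:      ", round(joint_ll(data), 3))        # rewards fitting P(x, c)
print("conditional LL:", round(conditional_ll(data), 3))  # rewards P(c | x)
```

Because the two objectives can rank models differently, maximizing the first, as unrestricted structure learning does, gives no guarantee about the second.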