2005
DOI: 10.1007/s10994-005-0470-7

TAN Classifiers Based on Decomposable Distributions

Abstract: In this paper we present several Bayesian algorithms for learning Tree Augmented Naive Bayes (TAN) models. We extend the results in Meila & Jaakkola (2000a) to TANs by proving that, accepting a prior decomposable distribution over TANs, we can compute the exact Bayesian model averaging over TAN structures and parameters in polynomial time. Furthermore, we prove that the k-maximum a posteriori (MAP) TAN structures can also be computed in polynomial time. We use these results to correct minor errors in…
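The polynomial-time results build on the same structural fact exploited by classical TAN learning (Friedman et al.): the best tree over the attributes is a maximum-weight spanning tree under conditional mutual information I(Xi; Xj | C). The sketch below shows only that classical, non-Bayesian Chow-Liu step, not the paper's Bayesian model averaging; all function and variable names are my own illustration.

```python
import numpy as np
from collections import defaultdict

def cond_mutual_info(x, y, c):
    """Empirical conditional mutual information I(X; Y | C) in nats."""
    n = len(c)
    joint = defaultdict(int)
    for xi, yi, ci in zip(x, y, c):
        joint[(xi, yi, ci)] += 1
    xc, yc, cc = defaultdict(int), defaultdict(int), defaultdict(int)
    for (xi, yi, ci), cnt in joint.items():
        xc[(xi, ci)] += cnt
        yc[(yi, ci)] += cnt
        cc[ci] += cnt
    # I(X;Y|C) = sum p(x,y,c) log [ p(x,y,c) p(c) / (p(x,c) p(y,c)) ]
    return sum(cnt / n * np.log(cnt * cc[ci] / (xc[(xi, ci)] * yc[(yi, ci)]))
               for (xi, yi, ci), cnt in joint.items())

def tan_tree(X, y):
    """Maximum-weight spanning tree over the attribute columns of X,
    weighted by I(Xi; Xj | C) -- the Chow-Liu step of TAN learning
    (Kruskal's algorithm with a union-find structure)."""
    d = X.shape[1]
    edges = sorted(((cond_mutual_info(X[:, i], X[:, j], y), i, j)
                    for i in range(d) for j in range(i + 1, d)), reverse=True)
    parent = list(range(d))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Rooting the resulting tree at an arbitrary attribute and directing edges away from the root, with the class as an extra parent of every attribute, yields the TAN structure; the paper's contribution is averaging over all such trees rather than committing to this single maximum.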

Cited by 29 publications (26 citation statements)
References 11 publications
“…Several search algorithms, such as simulated annealing algorithms, genetic algorithms, and Tree Augmented Naïve Bayes (TAN) algorithms (Cerquides and Mantaras, 2005), have been developed for this purpose (Wu, 2010; Hruschka and Ebecken, 2007; Baesens et al, 2004). The knowledge-based approach, on the other hand, uses the causal knowledge of domain experts to construct networks.…”
Section: Fundamentals of BCNs
confidence: 99%
“…Step 5: compute the weight of sample Ins ∈ D based on formula (4). Step 6: compute the weight of sample Ins ∈ D based on formula (5). Step 7: make Dt(i, j) = Dt(i)…”
Section: Output: The Integrated Bayesian Classifier
confidence: 99%
“…However, it is not easy to generate the optimal Bayesian network in practice [2][3]. Therefore, the training of restricted Bayesian network classifiers has become a very active research field in recent years [4][5][6][7].…”
Section: Introduction
confidence: 99%
“…The tables show that the average classification accuracy of EBN-20 is 71.56%, which is better than the average accuracy of any of its 20 constituent classifiers, except for FTTAN-TAN and FTTAN-K2. The result is not surprising, because the TAN search and K2 search algorithms have exhibited excellent performance in data mining [25] [26] and the fine-tuning process makes them even better. The degradation of EBN-20 average accuracy is probably because EBN-20 combines fine-tuned and non-fine-tuned classifiers, which reduces diversity, as the constituent classifiers are not very different from one another.…”
Section: Stacking BN Classifiers and Their Corresponding Fine-Tune
confidence: 99%