2006
DOI: 10.1007/11815921_15
Bayesian Class-Matched Multinet Classifier

Abstract: A Bayesian multinet classifier allows a different set of independence assertions among variables in each of a set of local Bayesian networks composing the multinet. The structure of a local network is usually learned using a joint-probability-based score that is less specific to classification, i.e., classifiers based on structures providing high scores are not necessarily accurate. Moreover, this score is less discriminative for learning multinet classifiers because generally it is computed using on…

Cited by 10 publications (5 citation statements)
References 9 publications
“…This has been proved for this database and several other BNCs. For example, the TPDA algorithm (Cheng, Bell, and Liu 1997), PC algorithm (Spirtes, Glymour, and Scheines 2000), Chow-Liu multinet (Friedman, Geiger, and Goldszmidt 1997), tree-augmented naive (TAN) Bayes (Friedman, Geiger, and Goldszmidt 1997), RAI algorithm (Yehezkel and Lerner 2009), and tBCM² (Gurwicz and Lerner 2006) achieved accuracies between 77.2% and 82.9% (Gurwicz and Lerner 2006), which are similar, though sometimes slightly superior, to those reported in the current study. Therefore, we believe that a classification accuracy of 80% to 83% for the cytogenetic database is about the best a BNC can get.…”
Section: Discussion (supporting)
confidence: 81%
“…BNs that were originally used in knowledge representation and general probabilistic inference have recently been applied also to classification (Friedman, Geiger, and Goldszmidt 1997; Greiner et al. 2005; Grossman and Domingos 2004; Gurwicz and Lerner 2006; Kontkanen et al. 1999; Yehezkel and Lerner 2009). Without limiting the generality, we identify the class variable with the first variable X_1 = C and define X\C and Pa_i\C as the sets of graph nodes and parents of X_i excluding C, respectively, to apply Eq.…”
Section: Investigation of the K2 Algorithm (mentioning)
confidence: 99%
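The statement above uses a Bayesian network as a classifier by designating the class variable C and picking the class that maximizes the joint probability of class and features. A minimal sketch of that decision rule, using a naive (empty-graph) structure over discrete features for brevity — the function names, Laplace smoothing, and data layout here are illustrative assumptions, not the cited works' implementation:

```python
from collections import Counter, defaultdict

def train_nb(X, y, alpha=1.0):
    """Fit class priors P(C) and per-feature conditionals P(X_i | C)
    with Laplace smoothing, for discrete feature vectors."""
    n = len(y)
    priors = {c: cnt / n for c, cnt in Counter(y).items()}
    cond = defaultdict(Counter)  # (class, feature index) -> value counts
    for xs, c in zip(X, y):
        for i, v in enumerate(xs):
            cond[(c, i)][v] += 1
    def p(c, i, v, k):
        # Smoothed conditional probability; k = cardinality of feature i.
        return (cond[(c, i)][v] + alpha) / (sum(cond[(c, i)].values()) + alpha * k)
    return priors, p

def predict_nb(xs, priors, p, k=2):
    """Bayes decision rule: argmax_c P(c) * prod_i P(x_i | c)."""
    best, best_score = None, -1.0
    for c, pr in priors.items():
        score = pr
        for i, v in enumerate(xs):
            score *= p(c, i, v, k)
        if score > best_score:
            best, best_score = c, score
    return best
```

A richer structure (e.g., TAN or a K2-learned graph) would replace the per-feature conditionals with conditionals on each node's parent set; the decision rule itself is unchanged.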
“…An extension to the Chow-Liu tree Multi-net was proposed in [30], which involves maximizing the cross-class divergence. The Bayesian class-matched multi-net algorithm [31] is another extension that uses a scoring function based on detection-rejection behavior. The recursive Bayesian Classifier induction [32] can build multiple local BNs for the same class by further dividing its data subset recursively.…”
Section: Bayesian Multi-net Classifiers (mentioning)
confidence: 99%
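The Chow-Liu construction that these multi-net extensions build on connects features by a maximum-weight spanning tree, where edge weights are pairwise empirical mutual information. A rough sketch under the usual assumptions (discrete, fully observed data; function names are illustrative, and this is the plain Chow-Liu tree, not the cross-class-divergence variant):

```python
import math
from collections import Counter
from itertools import combinations

def mutual_info(xs, ys):
    """Empirical mutual information I(X;Y) between two discrete columns."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c * n) / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def chow_liu_edges(columns):
    """Maximum-weight spanning tree over features (Kruskal with union-find),
    weighted by pairwise mutual information."""
    d = len(columns)
    weights = sorted(((mutual_info(columns[i], columns[j]), i, j)
                      for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    edges = []
    for _, i, j in weights:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges
```

In a Chow-Liu multinet, one such tree is learned per class from that class's data subset; the cross-class divergence extension mentioned above changes the edge weights, not the spanning-tree machinery.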
“…Solutions proposed to address this issue use the idea of Bayesian multinets (Geiger and Heckerman, 1996), which consist of several networks, each associated with a subset of the domain of one variable, often called distinguished. Bayesian multinets have been used for classification in Friedman et al. (1997), Gurwicz and Lerner (2006), Huang et al. (2003) and Hussein and Santos (2004), among others.…”
Section: Introduction (mentioning)
confidence: 99%
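A Bayesian multinet, as described in the statement above, fits a separate local network per value of the distinguished (class) variable, so each class can carry its own independence assertions. A toy sketch of that idea — here each class's "structure" is just a grouping of features into jointly modeled blocks, a deliberate simplification of real per-class structure learning, and all names are illustrative:

```python
from collections import Counter

class LocalModel:
    """Toy per-class 'network': features inside a group are modeled jointly;
    groups are treated as mutually independent."""
    def __init__(self, groups, alpha=1.0):
        self.groups, self.alpha = groups, alpha
        self.tables = [Counter() for _ in groups]
        self.n = 0
    def fit(self, X):
        self.n = len(X)
        for xs in X:
            for t, g in zip(self.tables, self.groups):
                t[tuple(xs[i] for i in g)] += 1
        return self
    def prob(self, xs, card=2):
        """Smoothed P(x | this class's local model); card = values per feature."""
        p = 1.0
        for t, g in zip(self.tables, self.groups):
            key = tuple(xs[i] for i in g)
            p *= (t[key] + self.alpha) / (self.n + self.alpha * card ** len(g))
        return p

def multinet_predict(xs, priors, models):
    # Each class is scored by its OWN local network: argmax_c P(c) * P(x | B_c).
    return max(priors, key=lambda c: priors[c] * models[c].prob(xs))
```

The point the sketch makes is structural: class 0 below assumes its two features are independent, while class 1 models them jointly — exactly the kind of per-class asymmetry a multinet permits and a single shared-structure classifier cannot express.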