Scaling up the Greedy Equivalence Search algorithm by constraining the search space of equivalence classes (2013)
DOI: 10.1016/j.ijar.2012.09.004

Cited by 34 publications (10 citation statements). References 17 publications.
“…For isolation of engine fault classes from data that has no labels assigned, the unsupervised learning-based BBN might thus be preferable. Different algorithms exist for this purpose, such as score-based, constraint-based, and hybrids of the two [194].…”
Section: Bayesian Belief Network (mentioning; confidence: 99%)
“…A hybrid method combines a score-based method either with a constraint-based method or with a variable selection method. Such methods often use a greedy search on a restricted search space in order to achieve computational efficiency, where the restricted space is estimated using conditional independence tests or variable selection methods [Tsamardinos et al., 2006; Schmidt et al., 2007; Schulte et al., 2010; Alonso-Barba et al., 2013]. Common choices for the restricted search space are an estimated skeleton of the CPDAG (CPDAG-skeleton) or an estimated conditional independence graph (CIG).…”
(mentioning; confidence: 99%)
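The restricted-space strategy described in the statement above can be illustrated with a small, self-contained sketch. Nothing below comes from the cited paper or from any particular library: the marginal-correlation skeleton test, the Gaussian BIC score, and the greedy edge-addition search are illustrative assumptions standing in for the conditional independence tests and equivalence-class search that real hybrid learners use.

```python
# Minimal sketch (assumed, not the cited paper's implementation) of a hybrid
# structure-learning loop: a constraint-based step builds a restricted search
# space (a crude skeleton from marginal-correlation tests), and a score-based
# greedy search (Gaussian BIC, edge additions only) is confined to that space.
import itertools
import numpy as np


def gaussian_bic(data, child, parents):
    """BIC of a linear-Gaussian node given a candidate parent set."""
    n = data.shape[0]
    y = data[:, child]
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * np.log(n) * (len(parents) + 2)


def estimate_skeleton(data, z_crit=1.96):
    """Constraint-based step: keep pairs whose Fisher-z correlation test
    rejects (marginal) independence; these pairs form the restricted space."""
    n, d = data.shape
    corr = np.corrcoef(data, rowvar=False)
    skeleton = set()
    for i, j in itertools.combinations(range(d), 2):
        r = np.clip(corr[i, j], -0.999999, 0.999999)
        z = 0.5 * np.log((1 + r) / (1 - r))
        if abs(z) * np.sqrt(n - 3) > z_crit:
            skeleton.add((i, j))
    return skeleton


def creates_cycle(parents, pa, ch):
    """True if adding the edge pa -> ch would close a directed cycle."""
    stack, seen = [pa], set()
    while stack:
        v = stack.pop()
        if v == ch:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(parents[v])
    return False


def hybrid_greedy_search(data, skeleton):
    """Score-based step: greedy BIC-improving edge additions, restricted to
    the pairs admitted by the skeleton."""
    d = data.shape[1]
    parents = {v: [] for v in range(d)}
    score = {v: gaussian_bic(data, v, []) for v in range(d)}
    while True:
        best = None
        for i, j in skeleton:
            for pa, ch in ((i, j), (j, i)):
                if pa in parents[ch] or creates_cycle(parents, pa, ch):
                    continue
                gain = gaussian_bic(data, ch, parents[ch] + [pa]) - score[ch]
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, pa, ch)
        if best is None:
            return parents
        gain, pa, ch = best
        parents[ch].append(pa)
        score[ch] += gain
```

Under these assumptions, `hybrid_greedy_search(data, estimate_skeleton(data))` returns a parent set per variable. Real hybrid learners differ in both steps: they use conditional (not just marginal) independence tests or variable selection to estimate the restricted space, and they search over equivalence classes of DAGs rather than single DAGs, as in the constrained GES approach of the cited paper.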
“…The CIG is a supergraph of the CPDAG-skeleton. Hybrid algorithms generally scale well with respect to the number of variables, but their consistency results are generally lacking even in the classical setting, except for Alonso-Barba et al. [2013].…”
(mentioning; confidence: 99%)
“…For this purpose, we have generated a set of random BNs with different degrees of difficulty. A summary of the properties of all these networks can be found in Table I. For each database, we have generated five samples of 5000 instances each.…”
Section: Methods (mentioning; confidence: 99%)
“…Although these networks offer a high diversity with respect to dimensionality, ranging from 20 to 724 variables, we wished to extend our experiments by adding some big networks because solving high-dimensional problems is the target of the scalable algorithms presented in this paper. For this purpose, we have generated a set of random BNs with different degrees of difficulty. A summary of the properties of all these networks can be found in Table I.…”
Section: Experimental Evaluation (mentioning; confidence: 99%)