Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2003
DOI: 10.1145/956750.956838

Time and sample efficient discovery of Markov blankets and direct causal relations

Abstract: Data Mining with Bayesian Network learning has two important characteristics: first, under broad conditions learned edges between variables correspond to causal influences, and second, for every variable T in the network a special subset (Markov Blanket) identifiable by the network is the minimal variable set required to predict T. However, all known algorithms learning a complete BN do not scale up beyond a few hundred variables. On the other hand, all known sound algorithms learning a local region of the network r…
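The Markov blanket mentioned in the abstract has a simple structural characterization in a Bayesian network: the parents, children, and spouses (other parents of the children) of T. A minimal sketch of reading it off a known DAG — the helper name and toy graph are illustrative assumptions, not the paper's algorithm, which discovers the blanket from data:

```python
def markov_blanket(parents, target):
    """Markov blanket of `target` in a DAG given as {node: set of parents}:
    the target's parents, children, and spouses (other parents of its children)."""
    children = {n for n, ps in parents.items() if target in ps}
    spouses = set()
    for c in children:
        spouses |= parents[c] - {target}
    return parents.get(target, set()) | children | spouses

# Toy DAG: A -> T -> B, S -> B (so S is a spouse of T); N is disconnected.
dag = {"A": set(), "T": {"A"}, "B": {"T", "S"}, "S": set(), "N": set()}
print(sorted(markov_blanket(dag, "T")))  # expected: ['A', 'B', 'S']
```

Conditioning on exactly this set renders T independent of every remaining variable under faithfulness, which is why it is the minimal set required to predict T.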


Cited by 280 publications (207 citation statements)
References 5 publications
“…The Bayesian network learning algorithm presented in this paper is based on the local discovery algorithm called Max-Min Parents and Children (MMPC) (a version of MMPC was published in Tsamardinos, Aliferis and Statnikov, 2003c). The Max-Min part of the algorithm name refers to the heuristic the algorithm uses, while the parents and children part refers to its output.…”
Section: The Max-Min Parents and Children Algorithm
confidence: 99%
“…If ⟨G, P⟩ and ⟨G′, P⟩ are two faithful Bayesian networks (to the same distribution), then for any variable T, it is the case that PC_T^G = PC_T^G′ (Verma & Pearl, 1990; Pearl & Verma, 1991; Tsamardinos, Aliferis & Statnikov, 2003c). Thus, the set of parents and children of T is unique among all Bayesian networks faithful to the same distribution and so we will drop the superscript and denote it simply as PC_T.…”
Section: The Max-Min Parents and Children Algorithm
confidence: 99%
“…Two major algorithms, HITON_PC and MMPC, for the discovery of PC(T) were introduced by Aliferis et al. [37] and Tsamardinos et al. [38], respectively. The Max-Min Parents and Children (MMPC) algorithm discovers the parents or children of a target of interest using a two-phase scheme.…”
Section: Related Work
confidence: 99%
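The two-phase scheme that this citation describes can be sketched as follows. The forward phase greedily admits the variable whose minimum association with the target, over all conditioning subsets of the current candidate set, is maximal (the "Max-Min" heuristic); the backward phase prunes false positives. The association function below is a hand-coded stand-in for illustration — the real algorithm uses a statistical test of conditional independence on data:

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, including the empty set."""
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def min_assoc(x, target, cpc, assoc):
    """Minimum association of x with target over all conditioning subsets of cpc."""
    return min(assoc(x, target, set(z)) for z in subsets(cpc))

def mmpc(target, variables, assoc):
    """Sketch of the Max-Min two-phase scheme for discovering parents/children."""
    cpc = set()                       # candidate parents and children
    remaining = set(variables) - {target}
    # Phase 1 (forward): add the variable with the maximal minimum association.
    while remaining:
        best = max(remaining, key=lambda x: min_assoc(x, target, cpc, assoc))
        if min_assoc(best, target, cpc, assoc) <= 0:
            break  # all remaining variables are independent given some subset
        cpc.add(best)
        remaining.remove(best)
    # Phase 2 (backward): drop any candidate that is independent of the target
    # conditioned on some subset of the other candidates (a false positive).
    for x in list(cpc):
        if min_assoc(x, target, cpc - {x}, assoc) <= 0:
            cpc.remove(x)
    return cpc

# Hand-coded stand-in: A and B stay associated with T under any conditioning
# set; N is independent of T given any conditioning set.
def toy_assoc(x, target, z):
    return {"A": 0.9, "B": 0.8, "N": 0.0}[x]

print(sorted(mmpc("T", ["A", "B", "N", "T"], toy_assoc)))  # expected: ['A', 'B']
```

Because the minimum is taken over all subsets of the growing candidate set, a variable that looks associated marginally but is screened off by an already-admitted candidate is kept out, which is the source of the algorithm's sample efficiency relative to conditioning on the full variable set.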
“…This strategy is also used by divide-and-conquer search techniques, such as the max-min Markov boundary algorithm (Tsamardinos et al 2003b), the parents and children based Markov boundary (PCMB) algorithm (Peña et al 2007), the breadth first search of Markov boundary algorithm introduced by (Fu and Desmarais 2007), and the algorithms included in the algorithmic framework called GLL (Aliferis et al 2010a).…”
Section: Introduction
confidence: 99%