2019
DOI: 10.1002/sam.11430
An empirical Bayes approach for learning directed acyclic graph using MCMC algorithm

Abstract: One hypothetically well‐founded approach for learning a Directed Acyclic Graph (DAG) is to utilize the Markov Chain Monte Carlo (MCMC) techniques. In the MCMC, the uniform noninformative priors on all of the possible graphs are considered. This brings about computational costs, making them impractical for learning the structure of DAGs with numerous variables. In this paper, we focus on the discrete variables and use the data information to restrict the space of possible graphs. This approach can be interprete…
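The abstract describes structure MCMC over DAGs: propose a graph change, reject it if it creates a cycle, and accept or reject by a Metropolis rule on the posterior score. A minimal illustrative sketch of that scheme is below; the function names (`structure_mcmc`, `log_score`) and the single-edge add/delete proposal are assumptions for illustration, not the paper's actual algorithm or its empirical Bayes prior.

```python
import math
import random

def is_acyclic(n, edges):
    # Kahn's algorithm: the graph on nodes 0..n-1 is a DAG iff every
    # node can be peeled off in some topological order.
    indeg = {v: 0 for v in range(n)}
    for _, v in edges:
        indeg[v] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    removed = 0
    while queue:
        u = queue.pop()
        removed += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return removed == n

def structure_mcmc(n, log_score, steps=1000, seed=0):
    # Metropolis-Hastings over DAGs with single-edge add/delete moves and
    # a uniform prior over graphs; `log_score` stands in for the log
    # marginal likelihood of the data given a candidate graph.
    rng = random.Random(seed)
    edges = set()
    current = log_score(edges)
    for _ in range(steps):
        u, v = rng.sample(range(n), 2)
        proposal = set(edges)
        if (u, v) in proposal:
            proposal.discard((u, v))   # delete move
        else:
            proposal.add((u, v))       # add move
        if not is_acyclic(n, proposal):
            continue                   # cyclic proposals are rejected outright
        new = log_score(proposal)
        # Metropolis acceptance on the log scale.
        if math.log(rng.random()) < new - current:
            edges, current = proposal, new
    return edges
```

The paper's point is that replacing the uniform prior with a data-driven (empirical Bayes) prior shrinks the effective search space that such a sampler must explore.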

Cited by 3 publications (3 citation statements)
References 25 publications
“…Further, the hazard function is extended as h(t_i) = φ_i(t_i | b_ip, Y_ip, β_p) = h_0p(t_i) exp(β_p Y_ip + b_ip), where b_ip ∼ N(0, ·). We have supposed that all unknown parameters in the CR model are independent and follow a Poisson distribution. This analysis has been performed through OpenBUGS software [20–22].…”
Section: Methods
confidence: 99%
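The statement above quotes a frailty proportional-hazards form, h(t_i) = h_0p(t_i) exp(β_p Y_ip + b_ip). A small numeric sketch of evaluating that hazard follows; the constant baseline hazard `h0` and every parameter value are made-up placeholders, not values from the cited work.

```python
import math

def hazard(t, beta_p, y_ip, b_ip, h0=lambda t: 0.1):
    # Proportional-hazards form: baseline hazard h0(t) scaled by the
    # exponential of the linear predictor beta_p * Y_ip plus the
    # subject-level random effect b_ip.
    return h0(t) * math.exp(beta_p * y_ip + b_ip)
```

With β_p = 0 and b_ip = 0 the hazard reduces to the baseline h0(t), which is a quick sanity check on the formula.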
“…Firstly, as can be seen in Figures (3) and (4), one variable must be determined as the starting point of the learning algorithm. Previous works not only assumed that the variables follow normality assumptions but also started from an initial index of 1, meaning that their algorithms begin with the first random variable, as in [20,14].…”
Section: Learning DAGs
confidence: 99%
“…The problem remained a gap in the literature until the investigations provided in [1,3]. Another applicable strategy for learning a DAG, the Markov Chain Monte Carlo (MCMC) procedures, has been discussed in depth in [14,12,8,11,7,5]. None of the above methods considers the ordering of the variables, yet this issue is clearly relevant in many real-world application disciplines such as genetics and finance.…”
Section: Introduction
confidence: 99%