2018
DOI: 10.1613/jair.1.11228

A Review of Inference Algorithms for Hybrid Bayesian Networks

Abstract: Hybrid Bayesian networks have received increasing attention in recent years. The difference with respect to standard Bayesian networks is that they can host discrete and continuous variables simultaneously, which extends the applicability of the Bayesian network framework in general. However, this extra feature also comes at a cost: inference in these types of models is computationally more challenging, and the underlying models and updating procedures may not even support closed-form solutions. In thi…
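To make the abstract's point concrete: in a hybrid Bayesian network a discrete variable can parent a continuous one, the conditional linear Gaussian (CLG) pattern being the classic tractable case. Below is a minimal sketch of exact posterior inference in such a model; the model, names, and parameter values are illustrative and not taken from the paper.

```python
import math

# Minimal hybrid Bayesian network: a discrete variable A (a binary "regime")
# and a continuous child X whose distribution depends on A. This is the
# conditional linear Gaussian (CLG) pattern; all values here are illustrative.
P_A = {0: 0.7, 1: 0.3}            # prior over the discrete parent
MEAN = {0: 0.0, 1: 4.0}           # X | A=a  ~  Normal(MEAN[a], SD[a]^2)
SD = {0: 1.0, 1: 2.0}

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_A_given_x(x):
    """Exact posterior P(A | X=x) via Bayes' rule.

    With a single discrete parent this is closed-form, but the number of
    mixture components grows exponentially with the number of discrete
    variables, which is one reason inference in hybrid BNs is hard.
    """
    unnorm = {a: P_A[a] * normal_pdf(x, MEAN[a], SD[a]) for a in P_A}
    z = sum(unnorm.values())
    return {a: w / z for a, w in unnorm.items()}

print(posterior_A_given_x(3.0))
```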

Cited by 39 publications (21 citation statements)
References 70 publications

“…This constraint is quite natural in the LR problem, as all the benchmark datasets contain only continuous variables. In the future, we plan to adapt our method to also allow discrete predictive attributes, which, in general, means learning constrained graphical structures by limiting the number of dependencies allowed [30] or even dealing with hybrid Bayesian networks [40]. Learning PGMs with hidden variables is not an easy task, but there are several approaches in the literature, most based on the use of the Structural EM (SEM) algorithm [41].…”
Section: Gaussian Mixture-based Semi-naive Bayes for Label Ranking
confidence: 99%
“…Due to this limitation [28,30], approximate inference is pursued for large BNs for fast and efficient estimation [31]. Stochastic sampling algorithms and search-based algorithms are often adopted for approximate solutions.…”
Section: Posterior Probability Inference
confidence: 99%
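The stochastic sampling approach mentioned in the statement above can be illustrated with likelihood weighting, one of the standard sampling algorithms for approximate BN inference: non-evidence variables are sampled from the prior and each sample is weighted by the likelihood of the observed evidence. A minimal sketch on the same illustrative CLG toy model as the earlier snippet:

```python
import math
import random

# Likelihood weighting: sample non-evidence variables from the prior and
# weight each sample by the likelihood of the evidence. Toy CLG model as in
# the earlier sketch (illustrative values, not from the paper).
P_A = {0: 0.7, 1: 0.3}
MEAN, SD = {0: 0.0, 1: 4.0}, {0: 1.0, 1: 2.0}

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lw_posterior_A(x_obs, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    weights = {a: 0.0 for a in P_A}
    for _ in range(n_samples):
        a = 0 if rng.random() < P_A[0] else 1            # sample A from its prior
        weights[a] += normal_pdf(x_obs, MEAN[a], SD[a])  # weight by p(x_obs | a)
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

print(lw_posterior_A(3.0))  # converges to the exact Bayes-rule posterior
```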
“…These have included using mixtures of polynomials (Shenoy; Shenoy & West) and mixtures of truncated basis functions (Langseth, Nielsen, Perez-Bernabe, & Salmeron; Langseth, Nielsen, Rumi, & Salmeron; Perez-Bernabe, Salmeron, & Langseth) to approximate the distributions of the data. However, to a large extent, many of these methods have focused on parameter learning of the network (Salmeron, Rumi, Langseth, Nielsen, & Madsen). Other methods for structure learning of hybrid Bayesian networks are still under development (Karra & Mili; Talvitie, Eggeling, & Koivisto).…”
Section: Other Tools for Hybrid Bayesian Network Modeling
confidence: 99%
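The mixtures of polynomials (MoPs) and mixtures of truncated basis functions (MoTBFs) cited in this statement share one core idea: replace an intractable continuous density with polynomial (or exponential) pieces on bounded intervals, so that the products and integrals arising during inference stay closed-form. A rough one-piece illustration of that idea, assuming NumPy; real MoP/MoTBF fitting also enforces non-negativity and normalization:

```python
import numpy as np

# Approximate a Gaussian density by a low-degree polynomial on a bounded
# interval: the essence of MoP/MoTBF potentials. One piece only, for brevity.
def gaussian(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

xs = np.linspace(-3, 3, 200)              # truncate the support to [-3, 3]
coeffs = np.polyfit(xs, gaussian(xs), 6)  # degree-6 least-squares fit
poly = np.poly1d(coeffs)

# Polynomials integrate exactly, which is what keeps marginalization
# closed-form when MoP potentials are combined during inference.
mass = poly.integ()(3) - poly.integ()(-3)
print(f"max abs error: {np.max(np.abs(poly(xs) - gaussian(xs))):.4f}")
print(f"integral over [-3, 3]: {mass:.4f}")  # close to 1
```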