2015
DOI: 10.1007/978-3-319-23540-0_11

Multivariate Cluster-Based Discretization for Bayesian Network Structure Learning

Cited by 8 publications
(7 citation statements)
References 14 publications
“…But the expressive power shall be increased. In some sense, a ctdBN learning algorithm has already been provided in [18]. This algorithm raises some issues, notably the fact that computing discretizations conditionally to the nodes in the Markov blankets of each discretized node limits the ctdBNs that can be learnt.…”
Section: Results (confidence: 99%)
“…and ii) will the loss of information affect significantly the results of inference? A possible answer to the first question consists of exploiting "conditional truncated densities" [18]. The answer to the second question of course strongly depends on the discretization performed but, as we shall see, conditional truncated densities can limit the discrepancy between the exact a posteriori marginal density functions of the continuous random variables and the approximation they provide.…”
Section: Definition 3 (Discretization) (confidence: 99%)
“…We thus propose to discretize each continuous criterion X_τ on a three-level scale {OK, degraded, KO} (noting its discretized counterpart X_τ^d). Considering the discretization step in the model itself is a usual practice when dealing with probabilistic graphical models (see for instance (Mabrouk et al., 2015)). We then propose in a similar way to deal with this discretization step in the form of a Gaussian mixture model whose parameters are learnt in an unsupervised way by the Expectation-Maximization (EM) algorithm (Dempster et al., 1977).…”
Section: Discretization (confidence: 99%)
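The approach quoted above (fit a Gaussian mixture by EM, then read off the boundaries between components as discretization cut points) can be sketched in a few lines. This is a minimal illustration, not the cited authors' implementation: the function names `fit_gmm_1d` and `cut_points`, the quantile-based initialization, and the grid search for component boundaries are all our assumptions.

```python
import numpy as np

def fit_gmm_1d(x, k=3, iters=200):
    """Fit a 1-D Gaussian mixture with EM (quantile init keeps runs deterministic)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means over the data
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point (log-space for stability)
        d = x[:, None] - mu[None, :]
        logp = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from the responsibilities
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / n
    return w, mu, var

def cut_points(w, mu, var, lo, hi, grid=10_000):
    """Cut points = where the most probable mixture component changes along the axis."""
    xs = np.linspace(lo, hi, grid)
    d = xs[:, None] - mu[None, :]
    logp = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(w)
    labels = logp.argmax(axis=1)
    return xs[np.flatnonzero(np.diff(labels)) + 1]
```

With k=3 components, the two resulting cut points split the axis into three intervals that can be labelled OK, degraded, KO in order of increasing criterion value.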
“…The grey area is the histogram of the V120_rms values. The blue and orange lines are the cut points between the three states OK, degraded and KO, determined as proposed by Mabrouk et al. (2015). Weakly critical vibrations are considered for this variable when V120_rms < 429, and strongly critical when V120_rms > 1735.…” (footnote a: https://www.probayes.com/)
Section: Discretization (confidence: 99%)
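Once cut points are fixed, applying the discretization is a simple binning step. The sketch below uses the two V120_rms cut points quoted above; the helper name `discretize` and the use of `np.digitize` are our choices for illustration, not part of the cited work.

```python
import numpy as np

# Cut points reported in the quoted study for V120_rms
CUTS = np.array([429.0, 1735.0])
STATES = ["OK", "degraded", "KO"]

def discretize(v_rms: float) -> str:
    """Map a continuous V120_rms reading to its three-level discrete state.

    np.digitize returns 0 for values below the first cut, 1 between the
    cuts, and 2 above the second cut, which indexes STATES directly.
    """
    return STATES[int(np.digitize(v_rms, CUTS))]
```

For example, a reading of 1000 falls between the two cut points and is labelled "degraded".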