2020
DOI: 10.3390/mca25020037

Data-Driven Bayesian Network Learning: A Bi-Objective Approach to Address the Bias-Variance Decomposition

Abstract: We present a novel bi-objective approach to address the data-driven learning problem of Bayesian networks. Both the log-likelihood and the complexity of each candidate Bayesian network are considered as objectives to be optimized by our proposed algorithm, the Nondominated Sorting Genetic Algorithm for learning Bayesian networks (NS2BN), which is based on the well-known NSGA-II algorithm. The core idea is to reduce the implicit selection bias-variance decomposition while identifying a set of competitive models…

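The abstract names two objectives scored per candidate network: log-likelihood (to maximize) and structural complexity (to minimize). As a hedged illustration of what such a bi-objective evaluation can look like for discrete data, the sketch below computes the decomposable maximum-likelihood log-likelihood of a candidate DAG and its parameter-count complexity. The parent-list encoding and all function names are assumptions for illustration, not the authors' NS2BN implementation.

```python
# A minimal sketch of the two objectives, assuming discrete data in a NumPy
# array and a candidate structure given as one parent list per node.
import numpy as np
from itertools import product

def log_likelihood(data, parents, card):
    """Maximum-likelihood log-likelihood of a candidate DAG over discrete data."""
    n = data.shape[0]
    ll = 0.0
    for child, pa in enumerate(parents):
        # Sum the multinomial log-likelihood over each parent configuration.
        for cfg in product(*(range(card[p]) for p in pa)):
            mask = (np.all(data[:, pa] == cfg, axis=1)
                    if pa else np.ones(n, dtype=bool))
            nij = int(mask.sum())
            if nij == 0:
                continue
            counts = np.bincount(data[mask, child], minlength=card[child])
            nz = counts[counts > 0]
            ll += float(np.sum(nz * np.log(nz / nij)))
    return ll

def complexity(parents, card):
    """Free CPT parameters of the DAG: the second objective, to be minimized."""
    return sum(int(np.prod([card[p] for p in pa])) * (card[c] - 1)
               for c, pa in enumerate(parents))

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 3))   # three binary variables
parents = [[], [0], [0, 1]]                # candidate DAG: 0 -> 1, {0,1} -> 2
card = [2, 2, 2]
print(log_likelihood(data, parents, card), complexity(parents, card))
```

In an NSGA-II-style loop, these two values per candidate would feed nondominated sorting and crowding-distance selection; higher log-likelihood trades off against lower complexity, which is the bias-variance tension the paper targets.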
Cited by 6 publications (5 citation statements)
References 42 publications

“…which amounts to computations of function evaluation, constraint evaluation, the Jacobian of the objective function and the constraint, the Hessian of the objective function and the constraint, and the backtracking line search of the step size. Computing the hypervolume Hessian takes Θ((µn)³) time in addition to the AD-computation of derivatives in Eq. (1).…”
Section: Computational Cost
confidence: 99%
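The quote lists the per-iteration pieces of a Newton-type hypervolume method, ending with a backtracking line search. As a minimal sketch of that last ingredient only, under the standard Armijo sufficient-decrease rule (the function names and toy objective are assumptions, not the cited paper's code):

```python
# Backtracking (Armijo) line search: shrink the step until sufficient
# decrease holds. f is the objective, x the current point, d a descent
# direction, and g the gradient at x; all are illustrative assumptions.
import numpy as np

def backtracking(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    # Assumes d is a descent direction (g @ d < 0), so the loop terminates.
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= rho                    # halve the step until Armijo holds
    return alpha

f = lambda x: float(x @ x)              # toy quadratic objective
x = np.array([1.0, -2.0]); g = 2 * x; d = -g
print(backtracking(f, x, d, g))         # -> 0.5, the exact minimizer step here
```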
“…For solving Eq. (15), we use Cholesky decomposition, which has a computational complexity of O((µ(n + p))³). It is certainly desirable either to have an analytic expression of the HV Hessian or to exploit the block-diagonal structure this matrix will certainly have for AD, which we, however, have to leave for future research.…”
Section: Computational Cost
confidence: 99%
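For readers unfamiliar with the step being costed, here is a minimal sketch of a Cholesky-based solve of a symmetric positive-definite system, the O(m³) operation the quote refers to with m = µ(n + p). The matrix, right-hand side, and names are illustrative assumptions; the cited paper's Eq. (15) is not reproduced here.

```python
# Solve H x = -g via Cholesky, as in a Newton-like step. H stands in for an
# (assumed SPD) Hessian of size m; factorization is O(m^3), each solve O(m^2).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
m = 12                                  # toy size; stands for µ(n + p)
A = rng.standard_normal((m, m))
H = A @ A.T + m * np.eye(m)             # symmetric positive definite test matrix
g = rng.standard_normal(m)

c, low = cho_factor(H)                  # O(m^3) Cholesky factorization
step = cho_solve((c, low), -g)          # cheap triangular solves
assert np.allclose(H @ step, -g)
```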
“…Some of them apply local approximations with simpler but interpretable models [7], while others estimate feature importance using permutation techniques [8]; additionally, they search for influential samples that greatly affect model parameters [9]. Alternatively, there are methods applied while generating a model that aim to find a trade-off between model accuracy and complexity [10]. At this stage, optimal sampling techniques or model structures that lead to higher accuracy and lower complexity might be determined [11,12].…”
Section: Introduction
confidence: 99%
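As a hedged illustration of the permutation technique the quote mentions ([8]): a feature's importance is estimated as the drop in a score when that feature's column is shuffled, breaking its association with the target. The function and its arguments are assumptions for illustration; any fitted estimator with a .predict method would do.

```python
# Permutation feature importance: score drop after shuffling one column.
import numpy as np

def permutation_importance(model, X, y, score, rng=None):
    rng = rng or np.random.default_rng(0)
    base = score(y, model.predict(X))   # score on intact data
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])           # break the feature-target association
        importances.append(base - score(y, model.predict(Xp)))
    return np.array(importances)
```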
“…In many applications, we are faced with several conflicting and incommensurable objectives that have to be optimized concurrently (e.g., References [1][2][3][4][5]). As a general example, for the design of a certain product, two important objectives are, in most cases, the cost of the product (to be minimized) and its quality (to be maximized), among other possible goals.…”
Section: Introduction
confidence: 99%
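The cost-versus-quality example rests on Pareto dominance, the same notion NSGA-II's nondominated sorting is built on. A minimal sketch, assuming both objectives are expressed as minimized quantities (quality can be negated):

```python
# A point dominates another if it is no worse in every objective and
# strictly better in at least one; the nondominated front is what survives.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Return the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

print(nondominated_front([(1, 5), (2, 3), (4, 4), (3, 2)]))
# -> [(1, 5), (2, 3), (3, 2)]   ((4, 4) is dominated by (2, 3))
```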