2019
DOI: 10.1016/j.ijar.2019.07.008

On the relative expressiveness of Bayesian and neural networks

Abstract: A neural network computes a function. A central property of neural networks is that they are "universal approximators": for a given continuous function, there exists a neural network that can approximate it arbitrarily well, given enough neurons (and some additional assumptions). In contrast, a Bayesian network is a model, but each of its queries can be viewed as computing a function. In this paper, we identify some key distinctions between the functions computed by neural networks and those by marginal Bayesian network queries…
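To make the universal-approximation claim concrete, here is a minimal sketch of our own (not a construction from the paper): a one-hidden-layer ReLU network whose weights are set directly so that it computes the piecewise-linear interpolant of a target function on [0, 1]. The target f and the knot counts are hypothetical; the point is only that the sup-norm error shrinks as neurons are added.

    import numpy as np

    def relu_interpolant(f, n_knots):
        """One-hidden-layer ReLU net that computes the piecewise-linear
        interpolant of f at n_knots equally spaced knots on [0, 1]."""
        xs = np.linspace(0.0, 1.0, n_knots)
        ys = f(xs)
        slopes = np.diff(ys) / np.diff(xs)
        # One ReLU unit per segment; each unit contributes the change in slope.
        coeffs = np.concatenate([[slopes[0]], np.diff(slopes)])
        def net(x):
            h = np.maximum(0.0, np.subtract.outer(x, xs[:-1]))  # hidden layer
            return ys[0] + h @ coeffs                           # output layer
        return net

    f = lambda x: np.sin(2 * np.pi * x)   # hypothetical target function
    grid = np.linspace(0, 1, 1001)
    for n in (5, 20, 80):
        err = np.max(np.abs(relu_interpolant(f, n)(grid) - f(grid)))
        print(f"{n:3d} neurons -> sup error {err:.4f}")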

Cited by 18 publications (20 citation statements)
References 23 publications
“…It is known that neural networks are universal approximators, which means that they can approximate any function to an arbitrarily small error. However, Choi et al [45] showed that the functions induced by BN queries are polynomials. To make these queries universal approximators, they proposed a testing BN, which chooses a parameter value depending on a threshold test instead of simply having a fixed parameter value.…”
Section: Discussion
confidence: 99%
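The contrast this statement draws can be illustrated in a few lines (a minimal sketch of our own; the two-node network, threshold, and parameter values are hypothetical and not the construction from [45]): an ordinary marginal query is a polynomial in the CPT entries, whereas a testing CPT selects its parameter by comparing a posterior to a threshold, which is exactly the non-polynomial step a fixed CPT cannot express.

    # Two-node Bayesian network X -> Y over binary variables.
    theta_x = 0.3                        # P(X=1)
    theta_y_x0, theta_y_x1 = 0.2, 0.9    # P(Y=1 | X=0), P(Y=1 | X=1)

    # Ordinary query: P(Y=1) is a (multilinear) polynomial in the parameters.
    p_y1 = (1 - theta_x) * theta_y_x0 + theta_x * theta_y_x1
    print(p_y1)  # 0.41

    # Testing CPT (hypothetical minimal form): the parameter for Y is not
    # fixed but selected by testing the posterior on its parent X against
    # a threshold, making the query a step function of its input.
    def testing_p_y1(posterior_x1, threshold=0.5, theta_pass=0.9, theta_fail=0.1):
        return theta_pass if posterior_x1 >= threshold else theta_fail

    print(testing_p_y1(0.3), testing_p_y1(0.7))  # 0.1 0.9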
“…Methods for XAI have been developed, including local interpretable model-agnostic explanations for Bayesian predictive models (Peltola 2018) and for convolutional neural networks (Mishra, Sturm, and Dixon 2017), visualization techniques (Grad-CAM) for CNNs (Selvaraju et al 2017), and black-box explanations through transparent approximations (Lakkaraju et al 2017). Hybrid models use explicit symbolic representations in conjunction with black-box techniques (Choi, Wang, and Darwiche 2019). XAI has also been expanded beyond the classical domains.…”
Section: State-of-the-art
confidence: 99%
“…In contrast, (deep) density estimators can model uncertainty, but (marginal) inference is in general intractable. Indeed, probabilistic circuits such as sum-product networks (SPNs) [26] provide tractable inference, but unfortunately, they are generally not universal function approximators [4]. Therefore, we recently proposed conditional sum-product networks (CSPNs) [33] that can harness the expressive power of universal function approximators such as neural networks, while still maintaining a wide range of probabilistic inference routines.…”
Section: Introduction
confidence: 99%
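The tractability side of this statement is easy to see in miniature (a sketch of our own, not the architecture from [26] or [33]; the mixture weights and Bernoulli parameters are made up): in a sum-product network, marginalizing a variable just sets its leaf to 1, so any marginal falls out of a single bottom-up evaluation of the circuit.

    # Tiny sum-product network over binary X1, X2:
    #   SPN(x1, x2) = 0.6 * Bern(0.8)(x1) * Bern(0.3)(x2)
    #               + 0.4 * Bern(0.1)(x1) * Bern(0.7)(x2)

    def bernoulli(p, val):
        # val = 1 or 0 for evidence; None marginalizes (leaf evaluates to 1).
        if val is None:
            return 1.0
        return p if val == 1 else 1.0 - p

    def spn(x1, x2):
        left = bernoulli(0.8, x1) * bernoulli(0.3, x2)    # first component
        right = bernoulli(0.1, x1) * bernoulli(0.7, x2)   # second component
        return 0.6 * left + 0.4 * right

    print(spn(1, 1))        # joint P(X1=1, X2=1) = 0.172
    print(spn(1, None))     # marginal P(X1=1) = 0.52, one bottom-up pass
    print(spn(None, None))  # normalization check: 1.0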