2019
DOI: 10.48550/arxiv.1901.10230
Preprint
Partially Exchangeable Networks and Architectures for Learning Summary Statistics in Approximate Bayesian Computation

Abstract: We present a novel family of deep neural architectures, named partially exchangeable networks (PENs) that leverage probabilistic symmetries. By design, PENs are invariant to block-switch transformations, which characterize the partial exchangeability properties of conditionally Markovian processes. Moreover, we show that any block-switch invariant function has a PEN-like representation. The DeepSets architecture is a special case of PEN and we can therefore also target fully exchangeable data. We employ PENs t…
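The fully exchangeable special case mentioned in the abstract (DeepSets) can be illustrated with a minimal sketch: each observation is embedded by a shared network, the embeddings are summed, and a second network maps the pooled vector to summary statistics. The weights and layer sizes below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny DeepSets-style network (the fully
# exchangeable special case of a PEN): phi embeds each observation,
# the embeddings are summed (permutation-invariant pooling), and rho
# maps the pooled vector to summary statistics.
W_phi = rng.normal(size=(1, 8))
W_rho = rng.normal(size=(8, 2))

def summary(x):
    """x: 1-D array of observations -> 2 learned summary statistics."""
    h = np.tanh(x[:, None] @ W_phi)   # per-element embedding, shape (n, 8)
    pooled = h.sum(axis=0)            # invariant to any reordering of x
    return pooled @ W_rho             # shape (2,)

x = rng.normal(size=50)
s1 = summary(x)
s2 = summary(rng.permutation(x))
print(np.allclose(s1, s2))  # True: output is permutation-invariant
```

Because the pooling is a plain sum, any permutation of the input leaves the output unchanged, which is exactly the invariance required for i.i.d. (fully exchangeable) data.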

Cited by 2 publications (10 citation statements)
References 19 publications
“…For the time series models, we used Partially Exchangeable Networks (PENs) [Wiqvist et al., 2019] to parametrize f_w and s_β. The key property of an r-PEN is that the transformation applied by the neural network is invariant to permutations of the input data that do not change the probability density of that data point, in the case in which the probability density is Markovian of order r (see Appendix D.3).…”
Section: Results
confidence: 99%
See 3 more Smart Citations
“…For the time series models, we used Partially Exchangeable Networks (PENs) [Wiqvist et al, 2019] to parametrize f w and s β . The key property of a r-PEN is that the transformation applied by the Neural Network is invariant to permutation of the input data which does not change the probability density of that data point, in the case in which the probability density is Markovian of order r (see Appendix D.3).…”
Section: Resultsmentioning
confidence: 99%
“…As AR(2) is a 2-Markovian model, we used a 2-PEN for f_w. MA(2) is instead not a Markovian model; however, Wiqvist et al. [2019] argued that it can be considered "almost" Markovian, so that the loss of information from imposing a PEN structure of high enough order is negligible. Following them, we therefore used a 10-PEN.…”
Section: Results
confidence: 99%
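The distinction this citation draws can be seen from the models themselves: an AR(2) value depends on its own two previous values (so the process is 2-Markovian), while an MA(2) value depends on unobserved noise terms, so no finite window of past observations is exactly sufficient. A minimal simulation sketch of the two models (parameter names and signatures are illustrative assumptions):

```python
import numpy as np

def simulate_ar2(theta1, theta2, n, rng):
    # AR(2): x_t = theta1 * x_{t-1} + theta2 * x_{t-2} + e_t,  e_t ~ N(0, 1)
    # x_t depends only on the two previous observations: 2-Markovian.
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(n):
        x[t] = (theta1 * (x[t - 1] if t >= 1 else 0.0)
                + theta2 * (x[t - 2] if t >= 2 else 0.0)
                + e[t])
    return x

def simulate_ma2(theta1, theta2, n, rng):
    # MA(2): x_t = e_t + theta1 * e_{t-1} + theta2 * e_{t-2}
    # x_t depends on latent noise, not past x: not Markovian in x,
    # which motivates the "high enough order" 10-PEN approximation.
    e = rng.normal(size=n + 2)
    return e[2:] + theta1 * e[1:-1] + theta2 * e[:-2]

rng = np.random.default_rng(0)
x_ar = simulate_ar2(0.6, -0.2, 200, rng)
x_ma = simulate_ma2(0.5, 0.3, 200, rng)
```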
“…There are also combinations of ABC and deep neural networks, for example learning summary statistics for ABC inference has been proposed in [17][18][19] and a method to generate globally sufficient summaries of arbitrary data has been developed in [20]. However, these approaches usually require on-the-fly simulations and do not use the generated summaries for parameter estimation.…”
confidence: 99%