2019
DOI: 10.48550/arxiv.1910.13233
Preprint

Neural Density Estimation and Likelihood-free Inference

Cited by 20 publications (22 citation statements)
References 0 publications
“…We use different sufficient statistics, which may have varying magnitudes. As recommended in [11], we run 1000 simulations and save the means and standard deviations to normalize the simulator outputs for the SBI inference modules.…”
Section: Parameter Identification by SBI Methods
confidence: 99%
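
The normalization step quoted above is straightforward to reproduce. Below is a minimal Python sketch, assuming a generic `simulator(theta)` that returns a vector of summary statistics and a `prior_sample(n)` that draws parameters; all names are illustrative, not from the cited paper or any SBI package API.

```python
import numpy as np

def fit_normalizer(simulator, prior_sample, n_pilot=1000):
    """Run pilot simulations; return per-statistic mean and standard deviation."""
    pilot = np.array([simulator(theta) for theta in prior_sample(n_pilot)])
    return pilot.mean(axis=0), pilot.std(axis=0)

def normalize(x, mean, std, eps=1e-8):
    """Z-score a simulator output so statistics of different magnitudes
    contribute comparably to the SBI inference module."""
    return (x - mean) / (std + eps)
```

Saving `mean` and `std` once and reusing them keeps the training simulations and the observed data on the same scale.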
“…The likelihood P(x|θ) is approximated from the simulated data by computing the probability Pr(‖x − x₀‖ < ε) that a simulated output falls within a vicinity of the observed data x₀. Since the chance of hitting this probability ball is low in high dimensions, likelihood-free methods usually give less accurate results than MCMC methods, where the likelihood can be evaluated ([11], [23]).…”
Section: A General Overview
confidence: 99%
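
The ε-ball approximation in the quoted passage can be written as a short Monte Carlo estimator. A hedged sketch, with `simulator` standing in for any stochastic model:

```python
import numpy as np

def abc_likelihood(simulator, theta, x0, eps, n_sim=1000):
    """Estimate Pr(||x - x0|| < eps) under p(x | theta) by simulation."""
    xs = np.array([simulator(theta) for _ in range(n_sim)])
    dists = np.linalg.norm(xs - x0, axis=1)
    return np.mean(dists < eps)  # fraction of simulations inside the eps-ball
```

As the dimension of x grows, the acceptance fraction decays towards zero for any fixed ε, which is the loss of accuracy the passage attributes to likelihood-free methods.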
“…The number of simulations required to perform density estimation grows exponentially as the data dimensionality increases; this is generally referred to as the curse of dimensionality in machine learning. Papamakarios (2019) illustrates this problem for density estimation using a simple example. Due to the overall advantages of SNL over SNRE, we use the former along with FPCA dimensionality reduction.…”
Section: Likelihood-free Inference of Abundances
confidence: 99%
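
The exponential growth the quote refers to can be seen with a toy experiment: under a fixed simulation budget, a crude histogram density estimator leaves almost every bin empty once the dimension is even moderate. A small illustrative script, not from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
n, bins_per_dim = 10_000, 10
for d in (1, 2, 4, 8):
    x = rng.uniform(size=(n, d))                      # n points in the unit cube
    cells = np.unique((x * bins_per_dim).astype(int), axis=0)
    print(f"d={d}: {len(cells)} of {bins_per_dim**d:,} bins occupied")
```

Already at d = 8, at most 10,000 of 10^8 bins can be occupied, so the histogram estimate is zero almost everywhere.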
“…The main idea behind any ABC method is to model the posterior distribution by approximating the likelihood as the fraction of accepted simulated data points from the simulator model, using a distance measure δ and a tolerance value ε. The first approach, known as the ABC-rejection scheme, was successfully applied in biology [Pritchard et al., 1999, Tavaré et al., 1997], and since then many alternative versions of the algorithm have been introduced, with the three main groups represented by Markov Chain Monte Carlo ABC [Marjoram et al., 2003], Sequential Monte Carlo (SMC) ABC [Beaumont et al., 2009], and neural-network-based ABC [Papamakarios, 2019]. Here, we focus on the MCMC-ABC version [Andrieu et al., 2009] as it can be more readily implemented and the computational costs are lower [Jasra et al., 2007].…”
Section: Introduction
confidence: 99%
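
For reference, the ABC-rejection scheme described in this passage fits in a few lines. A minimal sketch, assuming user-supplied `prior_sample`, `simulator`, and distance function `delta`; the MCMC-ABC variant the authors adopt replaces the independent prior draws with a Metropolis-Hastings proposal chain:

```python
import numpy as np

def abc_rejection(prior_sample, simulator, delta, x0, eps, n_draws=100_000):
    """Accept theta whenever the simulated data lands within eps of x0."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()          # draw from the prior
        x = simulator(theta)            # simulate data given theta
        if delta(x, x0) < eps:          # keep only close simulations
            accepted.append(theta)
    return np.array(accepted)           # approximate posterior samples
```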