2022
DOI: 10.21105/joss.04205

swyft: Truncated Marginal Neural Ratio Estimation in Python

Cited by 21 publications (21 citation statements)
References 35 publications
“…The tradeoff is that the likelihood for our persistence diagrams is only implicitly defined. Parameter estimation in the context of implicit likelihoods is precisely within the purview of the rapidly advancing field of simulation-based inference ([83–86]; see [87] for a recent review and [88–92] for cosmological applications). Within simulation-based inference it has been advocated (see e.g.…”
Section: Discussion
confidence: 99%
“…In particular, it might be interesting to employ a graph convolutional neural network as suggested in [41], in order to preserve spatial relations between distant pixels, and rotational invariance. Finally, recent developments in the field of simulation-based inference, such as our case, suggest that a promising approach to estimate the complex posterior distribution of the objective parameters are the so-called neural likelihood ratio estimation techniques [60], and in particular Truncated Marginal Neural Ratio Estimation [61–63]. Since these techniques are tailored for simulation-based problems, we hypothesise that it would be possible to achieve similar performance to our current neural network architecture with an even simpler structure and possibly fewer training samples.…”
Section: JCAP09(2023)029
confidence: 99%
“…When allocated one node (32 CPU threads) on high-performance computing clusters, our current nested sampling infrastructure sees typical per-target timescales on the order of a week. As such, our future work will instead rely on the development of a simulation-based inference (SBI; Cranmer et al. 2020) machine-learning infrastructure; these have seen great success in recent years (see Alsing et al. 2018, 2019; Miller et al. 2020; Tejero-Cantero et al. 2020; Miller et al. 2022; Legin et al. 2023b). The amortized nature of SBI will allow for computationally efficient deployment across parameter space in catalog-wide applications to current and future missions (Kepler, K2, TESS, PLATO, etc.…”
Section: Next Steps
confidence: 99%
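Several of the excerpts above refer to the truncation idea behind TMNRE (Miller et al. 2020, 2022): a classifier is trained to estimate the likelihood-to-evidence ratio, and prior regions where that ratio is negligible at the observed data are discarded before the next simulation round. The following is a minimal illustrative sketch of that idea in plain numpy, not swyft's actual API; the toy simulator, the logistic-regression ratio estimator, and all names here are hypothetical stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    # Hypothetical toy simulator: observation = parameter + Gaussian noise.
    return theta + rng.normal(0.0, 0.1, size=theta.shape)

def fit_log_ratio(theta, x, steps=2000, lr=0.5):
    """Logistic regression separating joint pairs (theta, x) from marginal
    pairs (shuffled theta, x); its logit approximates the log
    likelihood-to-evidence ratio log p(x|theta)/p(x)."""
    t_marg = rng.permutation(theta)
    feats = lambda t, d: np.stack([t, d, t * d, t**2, d**2], axis=1)
    X = np.vstack([feats(theta, x), feats(t_marg, x)])
    y = np.concatenate([np.ones(len(theta)), np.zeros(len(theta))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -30.0, 30.0)   # clip to avoid exp overflow
        g = 1.0 / (1.0 + np.exp(-z)) - y      # sigmoid(z) minus labels
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return lambda t, d: np.clip(feats(t, d) @ w + b, -30.0, 30.0)

# One TMNRE-style round: simulate from the prior, fit the ratio, then
# truncate the prior to where the ratio at the observation is non-negligible.
lo, hi = -1.0, 1.0                       # initial uniform prior bounds
x_obs = 0.3                              # pretend observation
theta = rng.uniform(lo, hi, 5000)
log_r = fit_log_ratio(theta, simulate(theta))

grid = np.linspace(lo, hi, 401)
lr_grid = log_r(grid, np.full_like(grid, x_obs))
keep = lr_grid > lr_grid.max() - np.log(1e4)   # epsilon-truncation rule
new_lo, new_hi = grid[keep].min(), grid[keep].max()
print(f"truncated prior: [{new_lo:.2f}, {new_hi:.2f}]")
```

In a full TMNRE loop this truncation is repeated, with each round simulating only inside the shrunken bounds, which is what makes the scheme simulation-efficient relative to untruncated ratio estimation.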