2021
DOI: 10.48550/arxiv.2107.03920
Preprint

Likelihood-Free Frequentist Inference: Confidence Sets with Correct Conditional Coverage

Abstract: Many areas of science make extensive use of computer simulators that implicitly encode likelihood functions of complex systems. Classical statistical methods are poorly suited for these so-called likelihood-free inference (LFI) settings, outside the asymptotic and low-dimensional regimes. Although new machine learning methods, such as normalizing flows, have revolutionized the sample efficiency and capacity of LFI methods, it remains an open question whether they produce reliable measures of uncertainty. This pa…

Cited by 5 publications (13 citation statements)
References 52 publications
“…From a Bayesian perspective, Rozet and Louppe (2021) propose using the focal and the peripheral losses to weigh down easily classified samples as a means to tune the conservativeness of a posterior estimator. Dalmasso et al (2021) consider the frequentist setting and introduce a theoretically-grounded algorithm for the construction of confidence intervals that are guaranteed to have perfect coverage, regardless of the quality of the used statistic. Second, in light of our results that ensembles produce more conservative posteriors, model averaging constitutes another promising direction of study, as a simple and efficient method to produce reliable posterior estimators.…”
Section: Discussion
confidence: 99%
“…WALDO expands on the framework formalized in [16], which consists of a modular procedure to (i) estimate a likelihood-based test statistic via odds ratios, (ii) estimate critical values C_{θ₀,α} across the parameter space via quantile regression, and (iii) check that the constructed confidence sets achieve the desired coverage level for all θ ∈ Θ. Here, we replace (i) and instead use posteriors or point predictions to compute τ_WALDO in (4).…”
Section: Methods
confidence: 99%
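Step (ii) of the procedure quoted above, estimating critical values C_{θ₀,α} across the parameter space via quantile regression, can be sketched as follows. This is a minimal illustrative stand-in, not the cited papers' implementation: the test statistic `tau` is a toy quantity, and the use of scikit-learn's gradient boosting with a pinball (quantile) loss is one of several reasonable choices of quantile regressor.

```python
# Illustrative sketch: estimate critical values C_{theta0, alpha} via quantile
# regression over simulated (theta, test statistic) pairs. The statistic below
# is a toy stand-in, not the odds-ratio statistic of the cited framework.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
alpha = 0.10  # miscoverage level; a (1 - alpha) confidence set uses the alpha-quantile cutoff

# Simulate B parameter draws and evaluate the test statistic at each draw.
B = 5000
theta = rng.uniform(-5, 5, size=(B, 1))
noise = rng.normal(0.0, 1.0, size=B)
tau = -0.5 * noise**2  # toy likelihood-based statistic (larger = more consistent with theta)

# Quantile regression of tau on theta estimates C_{theta, alpha}: the
# alpha-quantile of the statistic's null distribution at each parameter value.
qr = GradientBoostingRegressor(loss="quantile", alpha=alpha, n_estimators=200)
qr.fit(theta, tau)

# Evaluate the estimated cutoffs on a grid spanning the parameter space;
# the confidence set for observed data D is {theta : tau(D; theta) >= C_{theta, alpha}}.
theta_grid = np.linspace(-5, 5, 50).reshape(-1, 1)
critical_values = qr.predict(theta_grid)
```

Because the cutoff is learned as a function of θ rather than fixed globally, the resulting sets can target coverage at every parameter value, which is the point of step (ii).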
“…(iii) Coverage diagnostics. To check that the constructed confidence sets indeed achieve the desired level of conditional coverage, we leverage the diagnostics procedure introduced in [16]. For a third simulated train set T = {(θ^(j), D^(j))}_{j=1}^B, we construct a confidence region for each D^(j) ∈ T and then regress 1{θ^(j) ∈ R(D^(j))} against θ^(j), adopting a suitable regression method.…”
Section: Methods
confidence: 99%
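The coverage diagnostic quoted above, regressing the inclusion indicator 1{θ ∈ R(D)} on θ, can be sketched in a few lines. Everything here is an illustrative assumption: the confidence-set rule R(D) is a toy Gaussian interval, and logistic regression stands in for whatever "suitable regression method" the cited procedure would use; its predicted probability approximates conditional coverage P(θ ∈ R(D) | θ).

```python
# Illustrative sketch of the coverage diagnostic (step iii): estimate conditional
# coverage by regressing the indicator 1{theta in R(D)} on theta. The data and
# the interval rule below are toy stand-ins for a real simulator and confidence set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
B = 5000
theta = rng.uniform(-5, 5, size=(B, 1))
D = rng.normal(theta[:, 0], 1.0)  # one simulated observation per parameter draw

# Toy confidence set R(D) = [D - 1.96, D + 1.96], nominally 95% for D ~ N(theta, 1).
inside = (np.abs(D - theta[:, 0]) <= 1.96).astype(int)

# A probabilistic classifier plays the role of the regression method: its
# predicted probability at theta estimates the conditional coverage there.
clf = LogisticRegression().fit(theta, inside)
coverage_hat = clf.predict_proba(np.linspace(-5, 5, 9).reshape(-1, 1))[:, 1]
```

If the sets are well calibrated, `coverage_hat` should sit near the nominal level across the whole grid; dips below it at particular θ values flag regions of undercoverage.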