2016
DOI: 10.1137/15m1029527
Fast Sampling in a Linear-Gaussian Inverse Problem

Abstract: We solve the inverse problem of deblurring a pixelized image of Jupiter using regularized deconvolution and by sample-based Bayesian inference. By efficiently sampling the marginal posterior distribution for hyperparameters, then the full conditional for the deblurred image, we find that we can evaluate the posterior mean faster than regularized inversion, when selection of the regularizing parameter is considered. To our knowledge, this is the first demonstration of sampling and inference that takes less comp…
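The marginal-then-conditional scheme the abstract describes can be sketched for a generic linear-Gaussian model y = Ax + noise with precision hyperparameters (λ, δ): sample the hyperparameters from their marginal posterior (here by random-walk Metropolis with a flat prior on the log-hyperparameters), then draw the unknown x exactly from its Gaussian full conditional. All sizes, priors, and step sizes below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small model: y = A x + noise,
# x ~ N(0, delta^{-1} I), noise ~ N(0, lam^{-1} I).
n, m = 20, 15
A = rng.standard_normal((n, m))
x_true = rng.standard_normal(m)
lam_true = 25.0
y = A @ x_true + rng.standard_normal(n) / np.sqrt(lam_true)

def log_marginal(lam, delta):
    """log p(y | lam, delta) with x integrated out (up to a constant)."""
    # Marginally, y ~ N(0, lam^{-1} I + delta^{-1} A A^T).
    C = np.eye(n) / lam + A @ A.T / delta
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y))

def sample_x(lam, delta):
    """Exact draw from the Gaussian full conditional x | y, lam, delta."""
    Q = lam * A.T @ A + delta * np.eye(m)      # posterior precision
    L = np.linalg.cholesky(Q)
    mean = np.linalg.solve(Q, lam * A.T @ y)
    return mean + np.linalg.solve(L.T, rng.standard_normal(m))

# Random-walk Metropolis on (log lam, log delta) over the marginal
# posterior (improper flat prior on the log-hyperparameters assumed),
# then one conditional draw of x per hyperparameter sample.
theta = np.log([1.0, 1.0])
lp = log_marginal(*np.exp(theta))
xs = []
for _ in range(2000):
    prop = theta + 0.3 * rng.standard_normal(2)
    lp_prop = log_marginal(*np.exp(prop))
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    xs.append(sample_x(*np.exp(theta)))

x_mean = np.mean(xs, axis=0)   # posterior-mean estimate of x
```

Because each x-draw is exact given the hyperparameters, the chain only has to mix over the two-dimensional hyperparameter space, which is what makes this faster than iterating over the high-dimensional image.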

Cited by 29 publications (46 citation statements)
References 34 publications (96 reference statements)
“…Gibbs sampling [3] and low-rank independence sampling [8] have been proposed for this type of problem, but the statistical efficiency of the resulting MCMC chains produced by either approach decreases as the dimension of the state increases. We discuss this phenomenon in detail and consider a solution using marginalization-based methods [22,33,45]. Marginalization can be just as computationally prohibitive for sufficiently large-scale problems.…”
Section: Discussion
confidence: 99%
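The two-block Gibbs sampler the quote contrasts with marginalization alternates between the Gaussian full conditional for x and conjugate Gamma full conditionals for the precisions; its statistical efficiency degrades as the dimension of x grows because of the coupling between the two blocks. A minimal sketch for a generic hierarchical model y = Ax + noise with Gamma hyperpriors (all sizes and hyperprior parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative hierarchical model: y = A x + noise,
# noise precision lam, prior precision delta, Gamma(a, rate b) hyperpriors.
n, m = 20, 15
A = rng.standard_normal((n, m))
y = A @ rng.standard_normal(m) + 0.2 * rng.standard_normal(n)
a, b = 1.0, 1e-4

lam, delta = 1.0, 1.0
for _ in range(1000):
    # Block 1: x | y, lam, delta is Gaussian with precision Q.
    Q = lam * A.T @ A + delta * np.eye(m)
    L = np.linalg.cholesky(Q)
    x = np.linalg.solve(Q, lam * A.T @ y) \
        + np.linalg.solve(L.T, rng.standard_normal(m))
    # Block 2: conjugate Gamma full conditionals for the precisions
    # (np.random.Generator.gamma takes shape and *scale* = 1/rate).
    r = y - A @ x
    lam = rng.gamma(a + n / 2, 1.0 / (b + r @ r / 2))
    delta = rng.gamma(a + m / 2, 1.0 / (b + x @ x / 2))
```

Each sweep is cheap, but successive (x, λ, δ) draws become more correlated as m increases, which is the dimension-dependent efficiency loss the quote refers to.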
“…Moreover, the Gibbs sampler of [3] for inverse problems is used in the context of spatio-temporal models in [31]. Some properties of this Gibbs sampler are derived in [1], and various extensions are presented in [8,22,33], which have improved convergence properties and/or improved computational efficiency.…”
Section: Introduction
confidence: 99%
“…It would be possible to further optimize the complexity of the estimators in (13), (14) and (15) by a judicious choice of the TT accuracy ε, as well as the numbers of samples N₀ and N₁. There is of course also scope for full multilevel estimators as in [23,7]. In particular, the values of N₀ and N₁ can be determined by an adaptive greedy procedure [27], which compares empirical variances and costs of the two levels and doubles N on the level that has the maximum profit.…”
Section: =1
confidence: 99%
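The greedy allocation rule in the quote can be sketched as a two-level Monte Carlo loop that, each round, doubles the sample count on the level whose estimated profit (variance removed per unit of extra cost) is largest. The integrands, per-sample costs, and initial sample counts below are illustrative stand-ins, not the cited paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(2)

cost = [1.0, 10.0]                       # assumed cost per sample on each level
f0 = lambda u: np.sin(u)                 # cheap level-0 model
f1 = lambda u: np.sin(u) + 0.01 * u**2   # expensive level-1 model

# Two-level estimator: E[f1] ~= mean(f0 samples) + mean((f1 - f0) samples).
N = [16, 16]
u1 = rng.uniform(size=N[1])
samples = [f0(rng.uniform(size=N[0])), f1(u1) - f0(u1)]

for _ in range(10):
    var = [s.var(ddof=1) for s in samples]
    # Profit of doubling level l: variance removed, var/(2N),
    # divided by the extra cost of N more samples, cost*N.
    profit = [var[l] / (2 * N[l]) / (cost[l] * N[l]) for l in range(2)]
    l = int(np.argmax(profit))
    u = rng.uniform(size=N[l])
    new = f0(u) if l == 0 else f1(u) - f0(u)
    samples[l] = np.concatenate([samples[l], new])
    N[l] *= 2

estimate = samples[0].mean() + samples[1].mean()
```

Because the level-1 correction f1 − f0 has far smaller variance than f0 itself, the rule spends almost all of its budget on the cheap level, which is exactly the behavior a multilevel estimator exploits.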
“…In addition, we also ran all the experiments with h = 2⁻⁶ and N = 2¹⁴ fixed, varying only d to explicitly see the growth with d. The timings for these experiments are plotted using dashed lines. The cost for the ALS-Cross algorithm to build ũ_h grows cubically in d, while the cost to build the TT surrogate π̃ and the cost of the TT-CD sampling procedure grow linearly with d. Since the evaluation of π is dominated by the cost of the PDE solve, its cost does not grow with dimension.…”
Section: Convergence Of The Expected Quantities Of Interest
confidence: 99%