2021
DOI: 10.1088/1361-6544/abbe62

Wasserstein stability estimates for covariance-preconditioned Fokker–Planck equations

Abstract: We study the convergence to equilibrium of the mean field PDE associated with the derivative-free methodologies for solving inverse problems that are presented by Garbuno-Inigo et al (2020 SIAM J. Appl. Dyn. Syst. 19 412-441) and Herty and Visconti (2018 arXiv:1811.09387). We show stability estimates in the Euclidean Wasserstein distance for the mean field PDE by using optimal transport arguments. As a consequence, this recovers the convergence towards equilibrium estimates by Garbuno-Inigo et al (2020 SIAM J. App…
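For orientation, the covariance-preconditioned Fokker–Planck equation of the title can be written, in standard notation, as follows. This is a sketch reconstructed from the cited works, with f denoting the least-squares potential and m(ρ), C(ρ) the mean and covariance of ρ:

\partial_t \rho = \nabla \cdot \Bigl( \mathcal{C}(\rho) \bigl( \nabla \rho + \rho \nabla f \bigr) \Bigr),
\qquad
\mathcal{C}(\rho) := \int_{\mathbb{R}^d} \bigl( x - m(\rho) \bigr) \otimes \bigl( x - m(\rho) \bigr) \, \rho(x) \, \mathrm{d}x.

The nonlinear preconditioning by \mathcal{C}(\rho) is what distinguishes this equation from a standard linear Fokker–Planck equation, and it underlies the affine invariance discussed in the citation statements below.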

Cited by 16 publications (18 citation statements). References 24 publications.
“…This is in contrast with the sampling and inversion methods for inverse problems that are based on the ensemble Kalman filter, essentially because these methods are affine-invariant [21]: they behave similarly across the class of problems that differ only by an affine transformation. Ensemble Kalman methods can be viewed, at least in the case of a linear forward model, as coupled gradient descent dynamics or overdamped Langevin diffusions preconditioned by the covariance of the ensemble, providing good stability and convergence properties [20,10].…”
Section: Results
mentioning
confidence: 99%
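To make the covariance preconditioning concrete, the following is a minimal sketch, not code from any of the cited papers, of the corresponding interacting-particle scheme for an assumed linear forward model y = A u with unit observational noise: each particle performs an overdamped Langevin step preconditioned by the ensemble covariance. The names A, y, J, dt are illustrative, and the finite-ensemble correction terms of the cited works are omitted.

import numpy as np

# Covariance-preconditioned overdamped Langevin dynamics (EKS-style sketch)
# for Phi(u) = 0.5 * |y - A u|^2; the invariant measure is then
# N(A^{-1} y, (A^T A)^{-1}).
rng = np.random.default_rng(0)
d, J, dt = 2, 200, 0.01                    # dimension, ensemble size, step
A = np.array([[2.0, 0.0], [0.0, 0.5]])     # assumed linear forward model
y = np.array([1.0, 1.0])                   # assumed data

U = rng.normal(size=(J, d))                # ensemble, one particle per row
for _ in range(2000):
    m = U.mean(axis=0)
    C = (U - m).T @ (U - m) / J            # ensemble covariance C(rho)
    G = (U @ A.T - y) @ A                  # rows: grad Phi(u_k) = A^T (A u_k - y)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(d))  # matrix square root of C
    # Euler-Maruyama step for dU = -C grad Phi dt + sqrt(2 C) dW
    U = U - dt * G @ C + np.sqrt(2 * dt) * rng.normal(size=(J, d)) @ L.T

print("ensemble mean:", U.mean(axis=0))
print("target mean:  ", np.linalg.solve(A, y))

Because every particle is preconditioned by the same ensemble covariance, the contraction rate of this dynamics does not depend on the conditioning of A^T A, which is the Hessian-independence referred to in the next citation statement.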
“…EKI, EKS and ALDI enjoy the property of being affine invariant in the sense of [22]; see also [43]. This property was studied carefully in [21] for EKS and ALDI, after it had been observed that, in the simple case of a linear forward model, the convergence rate of EKI and EKS was independent of the Hessian of the regularized least-squares functional Φ_R [20,10]. As the terminology indicates, affine invariant methods are insensitive to affine transformations of the regularized least-squares functional Φ_R, which makes them particularly well-suited in cases where Φ_R exhibits strong anisotropy at its minimizer.…”
mentioning
confidence: 99%
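The affine invariance discussed above can also be checked numerically. The sketch below is my own construction, with an assumed quadratic potential and an assumed affine map (H, M, b are illustrative): it evolves the deterministic covariance-preconditioned gradient flow for Phi(u) and for the transformed potential Phi(M v + b), and verifies that the two ensembles remain related by u = M v + b after every Euler step.

import numpy as np

# Numerical check of affine invariance for the covariance-preconditioned
# gradient flow dU = -C(U) grad Phi(U) dt, with quadratic Phi(u) = 0.5 u^T H u.
rng = np.random.default_rng(1)
d, J, h = 2, 50, 0.05
H = np.array([[4.0, 1.0], [1.0, 1.0]])       # assumed Hessian of Phi
M = np.array([[3.0, 0.0], [1.0, 0.5]])       # assumed affine change of variables
b = np.array([1.0, -2.0])
grad = lambda U: U @ H                        # rows: grad Phi(u) = H u
grad_t = lambda V: (V @ M.T + b) @ H @ M      # rows: grad of Phi(M v + b)

U = rng.normal(size=(J, d))
V = (U - b) @ np.linalg.inv(M).T              # consistent start: u = M v + b
for _ in range(200):
    for E, g in ((U, grad), (V, grad_t)):
        C = np.cov(E.T, bias=True)            # ensemble covariance
        E -= h * g(E) @ C                     # explicit Euler step, in place

# Agreement up to round-off: the two flows are exactly conjugate step by step.
print("max |U - (V M^T + b)| =", np.abs(U - (V @ M.T + b)).max())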
“…for any x, y ∈ R^d; see similar computations in [26,13]. The first identity can be checked directly, and the second identity follows e.g.…”
Section: Propagation of Gaussians
mentioning
confidence: 87%
“…An alternative Kalman methodology (ensemble Kalman inversion, the EKI) for the optimization approach to the inverse problem, which involves iteration to infinity, was introduced and studied in [34,33] in discrete time and in [58,59] in continuous time; the idea of using ensemble methods for optimization rather than sampling was anticipated in [55]. The ensemble based optimization approach was generalized to approximate sampling of the Bayesian posterior solution to the inverse problem in [26] (the ensemble Kalman sampler, the EKS), and studied further in [13,27,49].…”
Section: Literature Review
mentioning
confidence: 99%
“…Gradient-free methods, which rely only on evaluations of the loss function, are therefore an attractive alternative to SGD methods in these settings. In this paper, we consider two particular classes of methods belonging to this category which have received a lot of attention lately: the Consensus-Based Optimization (CBO) methods [47,11,14,12], reviewed recently in [53], and methods based on the Ensemble Kalman Filter (EnKF) [17,21,34,51,25,26,13], which are mainly employed in the context of inverse problems but have also proved useful for machine learning tasks [41]. It is shown in [20] that gradient-free ensemble Kalman methods perform better than their gradient-based counterparts in noisy likelihood landscapes.…”
Section: Introduction
mentioning
confidence: 99%
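As a companion to the ensemble Kalman sketch above, the following is a minimal isotropic CBO iteration in the generic form used across the CBO literature; the test function and the parameters lam, sig, alpha are illustrative assumptions, not values taken from any cited paper. Particles drift toward a weighted consensus point and diffuse proportionally to their distance from it, using only evaluations of the loss.

import numpy as np

# Minimal isotropic consensus-based optimization (CBO) sketch:
# dX_i = -lam (X_i - x_bar) dt + sig |X_i - x_bar| dW_i,
# where x_bar is the Gibbs-weighted ensemble average.
rng = np.random.default_rng(2)

def rastrigin(X):                   # nonconvex test loss, global minimum at 0
    return 10 * X.shape[1] + np.sum(X**2 - 10 * np.cos(2 * np.pi * X), axis=1)

d, J, dt = 2, 200, 0.01
lam, sig, alpha = 1.0, 0.7, 50.0    # drift, noise, weight sharpness (assumed)
X = rng.uniform(-3, 3, size=(J, d))
for _ in range(3000):
    f = rastrigin(X)
    w = np.exp(-alpha * (f - f.min()))               # shifted for stability
    x_bar = (w[:, None] * X).sum(axis=0) / w.sum()   # consensus point
    D = X - x_bar
    noise = np.linalg.norm(D, axis=1, keepdims=True) * rng.normal(size=(J, d))
    X = X - lam * dt * D + sig * np.sqrt(dt) * noise # Euler-Maruyama step

print("consensus point:", x_bar)    # tends to concentrate near the origin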