Ensemble Kalman inversion is widely used in practice to estimate unknown parameters from noisy measurement data. Its low computational cost, straightforward implementation, and non-intrusive nature make the method appealing in various areas of application. We present a complete analysis of ensemble Kalman inversion with perturbed observations for a fixed ensemble size when applied to linear inverse problems. The well-posedness and convergence results are based on the continuous-time scaling limits of the method. The resulting coupled system of stochastic differential equations allows us to derive estimates on the long-time behaviour and provides insights into the convergence properties of ensemble Kalman inversion. We view the method as a derivative-free optimization method for the least-squares misfit functional, which opens up the perspective of using the method in various areas of application such as imaging, groundwater flow problems, and biological problems, as well as in the context of the training of neural networks.

AMS classification scheme numbers: 65N21, 62F15, 65N75, 65C30, 90C56
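To make the update rule concrete, the following is a minimal sketch of ensemble Kalman inversion with perturbed observations for a linear forward map, viewed as a derivative-free iteration for the least-squares misfit. The forward operator A, noise covariance Gamma, ensemble size J, artificial step size h and iteration count are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch of ensemble Kalman inversion (EKI) with perturbed observations
# for a linear forward map A: the ensemble is driven towards small values of the
# least-squares misfit without using derivatives of the forward map.
# All concrete values below (A, Gamma, J, h, number of steps) are assumptions.

rng = np.random.default_rng(0)

d, k, J = 5, 3, 20                       # parameter dim, data dim, ensemble size
A = rng.standard_normal((k, d))          # linear forward operator (assumed)
u_true = rng.standard_normal(d)
Gamma = 0.1 * np.eye(k)                  # observational noise covariance
y = A @ u_true + rng.multivariate_normal(np.zeros(k), Gamma)

ensemble = rng.standard_normal((J, d))   # initial ensemble drawn from a prior
h = 0.05                                 # artificial time step

for _ in range(200):
    G = ensemble @ A.T                   # forward map applied to each particle
    u_bar, g_bar = ensemble.mean(0), G.mean(0)
    # empirical cross-covariance C^{up} and data-space covariance C^{pp}
    Cup = (ensemble - u_bar).T @ (G - g_bar) / J
    Cpp = (G - g_bar).T @ (G - g_bar) / J
    # Kalman-type gain; perturbed observations add fresh noise to y per particle
    K = Cup @ np.linalg.inv(h * Cpp + Gamma)
    y_pert = y + rng.multivariate_normal(np.zeros(k), Gamma / h, size=J)
    ensemble = ensemble + h * (y_pert - G) @ K.T

print("data-space residual of ensemble mean:", np.linalg.norm(A @ ensemble.mean(0) - y))
```

As h is sent to zero, this iteration is exactly the kind of discrete scheme whose continuous-time scaling limit is the coupled system of stochastic differential equations analysed in the paper.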
The Ensemble Kalman methodology in an inverse problems setting can be viewed as an iterative scheme which is a weakly tamed discretization of a certain stochastic differential equation (SDE). Assuming a suitable approximation result, dynamical properties of the SDE can be rigorously pulled back via the discrete scheme to the original Ensemble Kalman inversion. The results of this paper make a step towards closing the gap of the missing approximation result by proving a strong convergence result for a simplified model given by a scalar stochastic differential equation. We focus here on a toy model with properties similar to those of the model arising in the context of the Ensemble Kalman filter. The proposed model can be interpreted as a single-particle filter for a linear map and thus forms the basis for further analysis. The difficulty in the analysis arises from the fact that the formally derived limiting SDE has non-globally Lipschitz continuous nonlinearities in both the drift and the diffusion. Here the standard Euler-Maruyama scheme might fail to provide a strongly convergent numerical scheme, and taming is necessary. In contrast to the strong taming usually employed, the method presented here uses a weaker form of taming. We present a strong convergence analysis by first proving convergence on a domain of high probability using a cut-off or localisation; combined with moment bounds for both the SDE and the numerical scheme, a bootstrapping argument then yields strong convergence.

… solutions of (1) stay near the origin and have p-th moments at least up to p < 3. Then, any moment of the difference between u and ũ_n will explode as h → 0.

There has been significant progress in the field of strongly convergent numerical schemes for SDEs with non-globally Lipschitz-continuous nonlinearities. Standard references for numerical methods for SDEs, such as [17] and [25, 22], show strong convergence of the Euler-Maruyama method only for globally Lipschitz-continuous drift and diffusion terms. Higham, Mao and Stuart [10] proved a conditional result about strong convergence of the Euler-Maruyama discretization for non-globally Lipschitz SDEs, provided that the moments of both the solution and the discretization stay bounded. This means that the question of strong convergence was replaced by the question of whether moments of the numerical scheme stay bounded, but Hutzenthaler, Jentzen and Kloeden [12] answered the latter in the negative, even proving that moments of the Euler-Maruyama scheme always explode in finite time if either the drift or the diffusion term is not globally Lipschitz. Instead, they proposed a slight modification of the EM method, the so-called "tamed" EM method (and implicit variants), in [13, 11]. The numerical scheme (2) arising in the EnKF analysis bears resemblance to the "tamed" methods used throughout the literature. In the case where only the drift is non-globally Lipschitz, an idea similar to (2) was already used in [13], but there the drift-tamed nonlinearity is strictly bounded by one, while in EnKF it is still allowed to grow linearly. In increment-ta…
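For concreteness, the following is a minimal sketch of the drift-tamed Euler-Maruyama scheme of Hutzenthaler, Jentzen and Kloeden referenced above. The scalar test SDE dX = -X³ dt + X dW and all numerical parameters are illustrative assumptions; they are not the toy model analysed in the paper.

```python
import numpy as np

# Minimal sketch of the drift-tamed Euler-Maruyama (EM) scheme applied to a
# scalar SDE with a non-globally Lipschitz drift.  The concrete choice
# dX = -X^3 dt + X dW below is an illustrative assumption.

rng = np.random.default_rng(1)

def drift(x):
    return -x**3          # non-globally Lipschitz drift

def diffusion(x):
    return x              # linearly growing diffusion

def tamed_em(x0, T, n):
    """One path of the drift-tamed EM scheme with n steps on [0, T]."""
    h = T / n
    x = x0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(h))
        f = drift(x)
        # Taming: the drift increment h*f / (1 + h*|f|) is bounded, which
        # prevents the finite-time moment explosion of the plain EM scheme.
        x = x + h * f / (1.0 + h * abs(f)) + diffusion(x) * dw
    return x

samples = np.array([tamed_em(x0=2.0, T=1.0, n=1000) for _ in range(500)])
print("estimated second moment at T = 1:", np.mean(samples**2))
```

The weaker taming discussed in the paper differs from this classical variant in that the tamed drift is still allowed to grow linearly rather than being strictly bounded by one.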
The Bayesian approach to inverse problems provides a rigorous framework for the incorporation and quantification of uncertainties in measurements, parameters and models. We are interested in designing numerical methods which are robust w.r.t. the size of the observational noise, i.e., methods which behave well in case of concentrated posterior measures. The concentration of the posterior is a highly desirable situation in practice, since it relates to informative or large data. However, it can pose a computational challenge for numerical methods based on the prior measure. We propose to employ the Laplace approximation of the posterior as the base measure for numerical integration in this context. The Laplace approximation is a Gaussian measure centered at the maximum a posteriori estimate and with covariance matrix depending on the log-posterior density. We discuss convergence results of the Laplace approximation in terms of the Hellinger distance and analyze the efficiency of Monte Carlo methods based on it. In particular, we show that Laplace-based importance sampling and Laplace-based quasi-Monte Carlo methods are robust w.r.t. the concentration of the posterior for large classes of posterior distributions and integrands, whereas prior-based importance sampling and plain quasi-Monte Carlo are not. Numerical experiments are presented to illustrate the theoretical findings.
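As an illustration of the construction described above, the following sketch builds a Laplace approximation (MAP point plus inverse-Hessian covariance) for a one-dimensional posterior and uses it as the proposal in self-normalised importance sampling. The concrete forward map, noise level, prior and integrand are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of Laplace-based importance sampling for a scalar posterior.
# The negative log-posterior below (standard normal prior, one noisy
# observation of the map u -> sin(u) + u) is an illustrative assumption.

rng = np.random.default_rng(2)
sigma = 0.05                         # small noise -> concentrated posterior
y_obs = 1.2                          # observed datum (assumed)

def neg_log_post(u):
    u = np.atleast_1d(u)[0]
    misfit = 0.5 * (y_obs - np.sin(u) - u) ** 2 / sigma**2   # likelihood term
    prior = 0.5 * u**2                                        # standard normal prior
    return misfit + prior

# Laplace approximation: Gaussian centred at the MAP estimate with covariance
# given by the inverse Hessian of the negative log-posterior at the MAP.
opt = minimize(neg_log_post, x0=0.0)
u_map = opt.x[0]
eps = 1e-4
hess = (neg_log_post(u_map + eps) - 2 * neg_log_post(u_map)
        + neg_log_post(u_map - eps)) / eps**2               # finite-difference Hessian
lap_var = 1.0 / hess

# Importance sampling with the Laplace approximation as proposal: the
# (unnormalised) log-weight is log posterior minus log proposal density.
n = 5000
u_samp = rng.normal(u_map, np.sqrt(lap_var), size=n)
log_w = (-np.array([neg_log_post(u) for u in u_samp])
         + 0.5 * (u_samp - u_map) ** 2 / lap_var)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Self-normalised estimate of the posterior mean of a test integrand
print("posterior mean of u^2 ≈", np.sum(w * u_samp**2))
```

Because the proposal tracks the concentrating posterior (its variance shrinks with the noise level through the Hessian), the importance weights stay well behaved as sigma decreases, which is the robustness property contrasted with prior-based importance sampling above.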
Ensemble Kalman inversion (EKI) is a method for the estimation of unknown parameters in the context of (Bayesian) inverse problems. The method approximates the underlying measure by an ensemble of particles and iteratively applies the ensemble Kalman update to evolve (the approximation of) the prior into the posterior measure. For the convergence analysis of EKI it is common practice to derive a continuous-time version, replacing the iteration with a stochastic differential equation. In this paper we validate this approach by showing that the stochastic EKI iteration converges to paths of the continuous-time stochastic differential equation; we consider both the nonlinear and the linear setting, and prove convergence in probability for the former and convergence in moments for the latter. The methods employed can also be applied to the analysis of more general numerical schemes for stochastic differential equations.
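The pathwise nature of this convergence statement can be illustrated numerically: run the same Euler-type iteration at several step sizes, all driven by increments of one fixed Brownian path, and compare against a fine-resolution reference. The scalar SDE du = -a·u dt + b·u dW used below is a generic stand-in chosen for simplicity, not the EKI dynamics themselves, and all constants are assumptions.

```python
import numpy as np

# Numerical illustration of pathwise (strong) convergence: an Euler-type
# iteration with step size h, driven by increments of one fixed Brownian path,
# approaches a fine-resolution reference path as h -> 0.
# The SDE du = -a*u dt + b*u dW and all constants are illustrative assumptions.

rng = np.random.default_rng(3)
a, b, u0, T = 1.0, 0.5, 1.0, 1.0

n_fine = 2**14                                   # reference ("exact") resolution
h_fine = T / n_fine
dW_fine = rng.normal(0.0, np.sqrt(h_fine), size=n_fine)

def euler_path(dW, h):
    u = u0
    for dw in dW:
        u = u + h * (-a * u) + b * u * dw        # one Euler-Maruyama step
    return u

u_ref = euler_path(dW_fine, h_fine)              # proxy for the SDE path at time T

# Coarser iterations reuse the SAME Brownian path by summing fine increments,
# so |u_h - u_ref| measures the pathwise error for this single realisation.
for n_coarse in [2**6, 2**8, 2**10, 2**12]:
    h = T / n_coarse
    dW_coarse = dW_fine.reshape(n_coarse, -1).sum(axis=1)
    err = abs(euler_path(dW_coarse, h) - u_ref)
    print(f"h = {h:.5f}   pathwise error = {err:.3e}")
```

Averaging such errors over many Brownian paths gives an empirical picture of convergence in moments, the mode of convergence established for the linear setting above.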