2019
DOI: 10.1214/19-ejs1615

Higher order Langevin Monte Carlo algorithm

Abstract: A new (unadjusted) Langevin Monte Carlo (LMC) algorithm with improved rates in total variation and in Wasserstein distance is presented. All these are obtained in the context of sampling from a target distribution π that has a density on R^d known up to a normalizing constant. Crucially, the Langevin SDE associated with the target distribution π is assumed to have a locally Lipschitz drift coefficient such that its second derivative is locally Hölder continuous with exponent β ∈ (0, 1]. Non-asymptotic bounds a…
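
The sampler described in the abstract is an unadjusted (Metropolis-free) discretisation of the Langevin SDE dX_t = -∇U(X_t) dt + √2 dW_t with π ∝ exp(-U). For reference, here is a minimal Python sketch of the basic first-order unadjusted Langevin algorithm; it is not the paper's higher-order scheme, and the target, step size, and chain length are illustrative assumptions:

```python
import numpy as np

def ula_sample(grad_U, x0, step, n_steps, rng=None):
    # Euler-Maruyama discretisation of dX_t = -grad U(X_t) dt + sqrt(2) dW_t,
    # targeting pi proportional to exp(-U). Basic first-order unadjusted scheme
    # only; the paper's higher-order algorithm is not reproduced here.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Illustrative target: standard Gaussian on R^2, U(x) = |x|^2 / 2, so grad U(x) = x.
chain = ula_sample(grad_U=lambda x: x, x0=np.zeros(2), step=1e-2, n_steps=10_000)
```
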

Cited by 17 publications (12 citation statements). References 29 publications.

“…In particular, in [1,6,14,23,34,35,39] an L^p-error rate of at least 1/2 has been proven for approximating X_1 by explicit Euler-type methods, e.g., tamed, projected or truncated Euler schemes, for suitable ranges of the values of p and for subclasses of such SDEs with coefficients that at least satisfy a monotone-type condition and a coercivity condition and are locally Lipschitz continuous with a polynomially growing (local) Lipschitz constant. We add that important applications of these results are emerging in areas of intense interest, due to their central role in Data Science and AI, such as MCMC sampling algorithms, see [3,36], and stochastic optimizers for fine tuning (artificial) neural networks and, more broadly, for solving non-convex stochastic optimization problems, see [21,20].…”
Section: Introduction (mentioning)
confidence: 96%
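
The tamed Euler schemes mentioned in this statement control super-linearly growing drifts by normalising the drift increment at each step. Below is a minimal scalar sketch using the common taming factor 1/(1 + h|b(x)|); the double-well drift, diffusion coefficient, and step size are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def tamed_euler(b, sigma, x0, h, n_steps, rng=None):
    # Tamed Euler-Maruyama for dX_t = b(X_t) dt + sigma(X_t) dW_t (scalar case).
    # Dividing the drift increment by 1 + h*|b(x)| keeps each step bounded even
    # when b grows super-linearly, e.g. a cubic drift.
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    path = np.empty(n_steps + 1)
    path[0] = x
    for k in range(n_steps):
        drift = b(x)
        x += h * drift / (1.0 + h * abs(drift)) + sigma(x) * np.sqrt(h) * rng.standard_normal()
        path[k + 1] = x
    return path

# Illustrative double-well drift with cubic (super-linear) growth.
path = tamed_euler(b=lambda x: x - x**3, sigma=lambda x: 1.0, x0=0.0, h=1e-3, n_steps=1_000)
```
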
“…As many (stochastic) gradient descent methods can be viewed as Euler discretizations of SDE (3), their application to super-linearly growing stochastic gradient is problematic, which is confirmed by the numerical experiments in [27] for the SGLD algorithm. To cope with this problem, [27] considers the use of a taming technique, see, e.g., [22], [30], [31], [4], [32], and a tamed unadjusted stochastic Langevin algorithm (TUSLA) is proposed, which is given by…”
Section: Introduction (mentioning)
confidence: 99%
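
The taming idea carries over to stochastic-gradient Langevin updates: the (possibly super-linearly growing) stochastic gradient is divided by a norm-dependent factor before the usual Langevin step. The sketch below illustrates one such tamed update; the normalisation used by the TUSLA algorithm of [27] differs in its details, and the step size, inverse temperature, and gradient used here are illustrative assumptions:

```python
import numpy as np

def tamed_langevin_step(theta, stoch_grad, lam, beta, rng):
    # One tamed stochastic-gradient Langevin update: the stochastic gradient is
    # scaled by 1 / (1 + sqrt(lam) * |theta|^2) so the drift stays bounded even
    # when the gradient grows super-linearly in theta. Illustrative taming only;
    # it is not the exact TUSLA normalisation from the cited work.
    g = stoch_grad(theta)
    tamed = g / (1.0 + np.sqrt(lam) * np.linalg.norm(theta) ** 2)
    noise = np.sqrt(2.0 * lam / beta) * rng.standard_normal(theta.shape)
    return theta - lam * tamed + noise

# Illustrative usage with a noisy quadratic-loss gradient.
rng = np.random.default_rng(0)
theta = np.ones(3)
for _ in range(1_000):
    theta = tamed_langevin_step(theta, lambda t: t + 0.1 * rng.standard_normal(t.shape),
                                lam=1e-2, beta=10.0, rng=rng)
```
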
“…These equations have many important applications, for example, in Bayesian statistics and molecular dynamics. We refer to [10,22,23,45,50], and the references therein.…”
Section: Introduction (mentioning)
confidence: 99%