2021
DOI: 10.48550/arxiv.2105.10102
Preprint

Error Bounds of the Invariant Statistics in Machine Learning of Ergodic Itô Diffusions

He Zhang,
John Harlim,
Xiantao Li

Abstract: This paper studies the theoretical underpinnings of machine learning of ergodic Itô diffusions. The objective is to understand the convergence properties of the invariant statistics when the underlying system of stochastic differential equations (SDEs) is empirically estimated with a supervised regression framework. Using the perturbation theory of ergodic Markov chains and the linear response theory, we deduce a linear dependence of the errors of one-point and two-point invariant statistics on the error in th…
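The supervised regression framework the abstract describes can be illustrated with a minimal sketch, assuming a stand-in Ornstein-Uhlenbeck diffusion (the example SDE, step sizes, and variable names here are illustrative assumptions, not taken from the paper): drift values are estimated from Euler-Maruyama increments of a simulated trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in ergodic Ito diffusion: Ornstein-Uhlenbeck, dX = -theta*X dt + sigma dW.
theta, sigma, dt, n_steps = 1.0, 0.5, 1e-2, 200_000

noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps - 1)
x = np.empty(n_steps)
x[0] = 0.0
for n in range(n_steps - 1):
    # Euler-Maruyama step; this discretization is one of the error sources the paper bounds.
    x[n + 1] = x[n] - theta * x[n] * dt + noise[n]

# Supervised training pairs: states and finite-difference drift targets.
inputs = x[:-1]
targets = (x[1:] - x[:-1]) / dt  # noisy samples of the true drift b(x) = -theta*x

# A linear fit suffices for this linear drift; the paper regresses with ReLU networks.
slope, intercept = np.polyfit(inputs, targets, 1)
print(round(slope, 2))  # should be near -theta = -1.0
```

With a nonlinear drift, the `np.polyfit` call would be replaced by the neural-network regression the paper analyzes; the finite-difference targets stay the same.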

Cited by 1 publication (4 citation statements)
References 45 publications
“…Theoretically, we deduced an error bound for the proposed approach for an SDE with global Lipschitz drift coefficients and a constant diffusion matrix, accounting for errors contributed by the discretization of the SDE in the training data, the regression of the drift terms using fully-connected ReLU networks with arbitrary width and layers, and the regression solution to the Fokker-Planck PDE using a fully-connected two-layer neural network with the ReLU^3 activation function. This error bound is deduced under various assumptions that underpin the perturbation theory result in [72], generalization errors in approximating Lipschitz continuous functions in [28], and in solving PDEs in [41].…”
Section: Discussion
confidence: 99%
“…The main goals of this theoretical study are to 1) understand under which mathematical assumptions the density estimation problem is well-posed, 2) establish the convergence of the proposed scheme, and 3) identify the error in terms of training sample size, width/length of the neural-network models, discretization time step and noise amplitudes in the training data, and the dimension of the stochastic processes. In conjunction, we will also verify whether the perturbation theory [72] is valid. Particularly, we will check whether the stochastic process associated with the estimated drift and diffusion terms (obtained from deep learning regression in the first step) can indeed estimate the underlying invariant measure accurately.…”
Section: Introduction
confidence: 93%
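The check described in this citation statement, whether a process driven by the estimated drift reproduces the underlying invariant measure, can be sketched as follows (again with a hypothetical Ornstein-Uhlenbeck example and an assumed learned coefficient `theta_est`, not the paper's actual experiments): the invariant measure of dX = -theta*X dt + sigma dW is the Gaussian N(0, sigma^2/(2*theta)), so the long-run sample variance of the surrogate process should match that of the true process.

```python
import numpy as np

rng = np.random.default_rng(1)

theta_true, sigma, dt, n_steps = 1.0, 0.5, 1e-2, 500_000
theta_est = 0.97  # hypothetical drift coefficient recovered by regression

def long_trajectory(theta):
    """Euler-Maruyama trajectory of dX = -theta*X dt + sigma dW."""
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    x, xs = 0.0, np.empty(n_steps)
    for n in range(n_steps):
        x += -theta * x * dt + noise[n]
        xs[n] = x
    return xs

# One-point invariant statistic: the stationary variance sigma^2 / (2*theta).
var_true = sigma**2 / (2 * theta_true)               # 0.125
var_est = long_trajectory(theta_est)[50_000:].var()  # drop burn-in samples
print(var_true, round(var_est, 3))
```

For this example the stationary variance sigma^2/(2*theta) depends smoothly on theta, so the gap between `var_est` and `var_true` shrinks roughly linearly with the drift error, consistent with the linear dependence of one-point invariant statistics on the estimation error that the abstract describes.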
See 2 more Smart Citations