2019
DOI: 10.1016/j.jcp.2018.10.024
Deep learning algorithm for data-driven simulation of noisy dynamical system

Abstract: We present a deep learning model, DE-LSTM, for the simulation of a stochastic process with an underlying nonlinear dynamics. The deep learning model aims to approximate the probability density function of a stochastic process via numerical discretization and the underlying nonlinear dynamics is modeled by the Long Short-Term Memory (LSTM) network. It is shown that, when the numerical discretization is used, the function estimation problem can be solved by a multi-label classification problem. A penalized maxim…
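The core idea in the abstract, approximating the predictive distribution by discretizing the target range into bins and letting an LSTM classify over those bins, can be sketched as follows. This is a minimal illustration, not the authors' DE-LSTM implementation: the bin count, layer sizes, optimizer, and the plain cross-entropy loss (rather than the paper's multi-label, penalized-likelihood formulation) are assumptions.

```python
# Minimal sketch (not the authors' DE-LSTM code): an LSTM that outputs a
# discretized probability distribution over the next value of a scalar
# time series, trained as a classification problem over bins.
# Bin count, layer sizes, and the optimizer are illustrative assumptions.
import torch
import torch.nn as nn

class DiscretizedLSTM(nn.Module):
    def __init__(self, n_bins=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_bins)    # logits over discretization bins

    def forward(self, x):                         # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])                # distribution of the next step

# Training step: the target is the index of the bin containing the next value,
# so cross-entropy loss turns density estimation into classification.
model = DiscretizedLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 20, 1)                        # toy input sequences
y_bin = torch.randint(0, 64, (32,))               # toy bin indices of next values
optimizer.zero_grad()
loss = loss_fn(model(x), y_bin)
loss.backward()
optimizer.step()
```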


Cited by 99 publications (66 citation statements) · References 37 publications
“…Deep neural networks are capable of approximating nonlinear dynamical systems, as shown in many studies [102, 103, 106, 123]. The general nonlinear dynamical system can be represented by an equation of the form…”
Section: Learning Framework
confidence: 99%
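The quoted sentence breaks off at "an equation of the form". A common way to write the general nonlinear dynamical system that such statements refer to is shown below; this is an illustrative form, not a reconstruction of the citing paper's exact notation.

```latex
% Illustrative statement of a general (autonomous) nonlinear dynamical system,
% in continuous and discrete time, approximated by a neural network surrogate.
\[
  \frac{d\mathbf{y}}{dt} = \mathcal{F}(\mathbf{y})
  \qquad \text{or, in discrete time,} \qquad
  \mathbf{y}_{t+1} = \mathcal{F}(\mathbf{y}_t),
\]
where a deep neural network provides a parametric approximation
$\mathcal{F}_{\boldsymbol{\theta}} \approx \mathcal{F}$ learned from trajectory data.
```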
“…Here, θ_j is the j-th column of θ. Note that, after a proper rescaling, the summations in the original ESN formulation (7) are time averages, while the summations in (19) correspond to an ensemble average, or a Monte Carlo sampling. Hence, when the dynamical system is ergodic, the time series data is missing at random, and the number of observations, ωT, is large enough, the linear map of the fixed-point method, θ_k, approaches θ* as the fixed-point iteration converges.…”
Section: Fixed-Point ESN for Sparse Observation
confidence: 99%
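The passage describes a fixed-point iteration whose converged linear map θ_k → θ* recovers the ESN readout when observations are sparse. A schematic of such an iteration is sketched below; the imputation-plus-ridge-refit update is a stand-in for illustration and does not reproduce the cited work's Eq. (19).

```python
# Schematic fixed-point iteration in the spirit of the quoted passage: the
# readout weights theta are refit repeatedly until they stop changing.
# The update (impute missing targets with the current readout, then refit by
# ridge regression) is illustrative, not the cited work's exact linear map.
import numpy as np

def fixed_point_theta(states, targets, observed, lam=1e-2, tol=1e-8, max_iter=200):
    """states: (T, N) reservoir states; targets: (T, d) desired outputs
    (rows where observed is False are ignored); observed: (T,) boolean mask.
    Returns the converged readout theta of shape (N, d)."""
    theta = np.zeros((states.shape[1], targets.shape[1]))
    for _ in range(max_iter):
        # impute missing targets with the current linear readout
        y_hat = states @ theta
        y = np.where(observed[:, None], targets, y_hat)
        # ridge-regression refit defines the next iterate theta_{k+1}
        A = states.T @ states + lam * np.eye(states.shape[1])
        theta_new = np.linalg.solve(A, states.T @ y)
        if np.linalg.norm(theta_new - theta) < tol:   # converged to theta*
            return theta_new
        theta = theta_new
    return theta
```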
“…Over the last decade, the artificial neural network (ANN) has been extensively studied due to its strength in identifying complex nonlinear structures in data [1,2,3]. Recent studies on the application of ANN to the data-driven modeling of physical systems have shown promising results [4,5,6,7]. In this work, we consider the data-driven reconstruction of the missing information by using recurrent neural networks (RNN).…”
Section: Introduction
confidence: 99%
“…Since a number of authors have begun to consider the use of machine/deep learning for problems in traditional computational physics, see e.g. [1,2,3,4,5,6,7,8,9,10,11,12], we are motivated to consider methodologies that constrain the interpolatory results of a network to be contained within a physically admissible region. Quite recently, [13] proposed adding physical constraints to generative adversarial networks (GANs), also considering projection as we do, while stressing the interplay between scientific computing and machine learning; we refer the interested reader to their work for even more motivation for such approaches.…”
Section: Introduction
confidence: 99%
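The projection idea in the quoted passage, constraining network outputs to a physically admissible region after inference, can be illustrated with two simple constraint sets. The box and probability-simplex projections below are generic examples; the cited works define their own, problem-specific admissible sets and projection operators.

```python
# Minimal illustration of post-hoc projection of network outputs onto a
# physically admissible set. The box and simplex constraints are examples
# only; they are not the constraint sets used in the cited works.
import numpy as np

def project_to_box(pred, lower, upper):
    """Euclidean projection of predictions onto the box [lower, upper]."""
    return np.clip(pred, lower, upper)

def project_to_simplex(pred):
    """Projection onto the probability simplex (entries non-negative, summing
    to one), via the standard sort-based algorithm."""
    u = np.sort(pred)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(u)) + 1) > 0)[0][-1]
    shift = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(pred + shift, 0.0)

raw = np.array([0.7, 0.5, -0.1])            # hypothetical raw network output
print(project_to_box(raw, 0.0, 1.0))        # [0.7, 0.5, 0.0]
print(project_to_simplex(raw))              # [0.6, 0.4, 0.0], sums to one
```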