2020
DOI: 10.48550/arxiv.2002.11309
Preprint

Neural Parametric Fokker-Planck Equations

Shu Liu, Wuchen Li, Hongyuan Zha, et al.

Abstract: In this paper, we develop and analyze numerical methods for high-dimensional Fokker-Planck equations by leveraging generative models from deep learning. Our starting point is a formulation of the Fokker-Planck equation as a system of ordinary differential equations (ODEs) on a finite-dimensional parameter space, with the parameters inherited from generative models such as normalizing flows. We call such ODEs neural parametric Fokker-Planck equations. The fact that the Fokker-Planck equation can be viewed as the L …
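To make the parametric-ODE idea concrete, here is a minimal sketch (my own illustration, not the authors' code) in which the generative model is just a 1D affine map T_theta(z) = mu + sigma*z with z ~ N(0,1). For the potential V(x) = x^2/2, the free energy driving the Fokker-Planck equation is F(mu, sigma) = (mu^2 + sigma^2)/2 - log(sigma) up to a constant, and the Wasserstein metric tensor on this location-scale family is the identity, so the parametric Fokker-Planck ODE reduces to plain gradient flow on (mu, sigma). The names free_energy_grad and solve_parametric_fpe are mine.

```python
# Minimal sketch: the parametric Fokker-Planck idea restricted to a 1D affine
# "flow" T_theta(z) = mu + sigma * z, z ~ N(0, 1), with potential V(x) = x^2/2.
# Here the Wasserstein metric tensor G(theta) is the identity, so the
# parameter-space ODE is theta' = -grad F(theta).
import numpy as np

def free_energy_grad(mu, sigma):
    # dF/dmu = mu,  dF/dsigma = sigma - 1/sigma
    return np.array([mu, sigma - 1.0 / sigma])

def solve_parametric_fpe(mu0=3.0, sigma0=0.5, dt=1e-2, steps=2000):
    theta = np.array([mu0, sigma0])
    for _ in range(steps):
        theta = theta - dt * free_energy_grad(*theta)  # forward Euler on the ODE
    return theta  # converges to (0, 1), i.e. the stationary density N(0, 1)

print(solve_parametric_fpe())  # approximately [0.0, 1.0]
```

The point of the toy example is that evolving the two parameters (mu, sigma) reproduces the density evolution of the full PDE on this family; the paper's method replaces the affine map by a normalizing flow with many parameters.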


Cited by 5 publications (9 citation statements). References: 33 publications.
“…These approaches have been widely applied to many realistic problems, such as fluid mechanics [29,2], high-dimensional PDEs (with applications in computational finance) [11,41], and uncertainty quantification [39,27,42,14,22], to name just a few. Meanwhile, generative models such as generative adversarial networks [9], variational autoencoders [17], and normalizing flows (NF) [24,30] have also been successfully applied to learn forward and inverse PDEs [3,43,40,20]. For instance, a physics-informed generative adversarial model was proposed in [38] to tackle high-dimensional stochastic differential equations.…”
Section: Introduction (mentioning; confidence: 99%)
“…While traditional grid-based numerical methods, such as finite element methods and finite difference methods [61,32,56], can be employed to solve the Fokker-Planck equation, they are usually limited to low-dimensional problems. On the other hand, neural network-based methods have been successfully used in solving high-dimensional PDEs [29,18,36,69,52,30,70,17], including the recent application to solving the high-dimensional Fokker-Planck equation [66,70,35]. These successes encourage us to also use deep learning to solve the approximate Fokker-Planck PDE.…”
Section: Introduction (mentioning; confidence: 99%)
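As a point of reference for the quoted contrast, below is a minimal 1D finite-difference Fokker-Planck solver (a hypothetical illustration, not taken from any of the cited works) for the equation d(rho)/dt = d/dx(rho V') + d^2(rho)/dx^2 with V(x) = x^2/2. An n-point grid per dimension becomes n^d unknowns in d dimensions, which is exactly why grid-based schemes are limited to low dimensions.

```python
# Explicit finite-difference solver for a 1D Fokker-Planck equation with
# quadratic potential. Stability requires dt <~ dx^2 / 2 for the diffusion term.
import numpy as np

def fokker_planck_fd(n=200, L=5.0, dt=1e-4, steps=5000):
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    rho = np.exp(-(x - 2.0) ** 2)          # initial density: off-center bump
    rho /= rho.sum() * dx                  # normalize to unit mass
    for _ in range(steps):
        d_drift = np.gradient(x * rho, dx)             # d/dx (rho * V'), V'(x) = x
        diffusion = np.gradient(np.gradient(rho, dx), dx)  # d^2 rho / dx^2
        rho = rho + dt * (d_drift + diffusion)
    return x, rho  # relaxes toward the Gibbs density proportional to exp(-x^2/2)
```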
“…Natural gradient descent has proven advantageous in various problems in machine learning and statistical inference, such as blind source separation [3], reinforcement learning [30], and neural network training [25,24,23,34,29,15,35,19]. Further applications include solution methods for high-dimensional Fokker-Planck equations [16,21]. It provides a natural framework for incorporating the curvature of the loss function induced by the Riemannian structure of the ρ-space [2], thereby accelerating convergence in the θ-space and controlling the model directly in the ρ-space rather than only in the θ-space.…”
(mentioning; confidence: 99%)
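For readers unfamiliar with the update being described: natural gradient descent preconditions the Euclidean gradient by the inverse of a metric tensor G(theta) pulled back from the ρ-space. A minimal sketch (hypothetical helper names, not from the cited works):

```python
# One step of natural gradient descent: theta <- theta - lr * G(theta)^{-1} grad L(theta).
# The metric G (e.g. Fisher or Wasserstein information matrix) encodes the
# Riemannian structure of the rho-space in theta-coordinates.
import numpy as np

def natural_gradient_step(theta, grad_loss, metric, lr=0.1):
    G = metric(theta)                                 # metric tensor at theta
    direction = np.linalg.solve(G, grad_loss(theta))  # G^{-1} * grad via linear solve
    return theta - lr * direction

# Toy check on an ill-conditioned quadratic L(theta) = 0.5 * theta^T A theta:
# when G equals the curvature A and lr = 1, one natural gradient step reaches
# the minimum exactly, while plain gradient descent zig-zags along the stiff axis.
A = np.diag([100.0, 1.0])
theta = np.array([1.0, 1.0])
theta = natural_gradient_step(theta, lambda t: A @ t, lambda t: A, lr=1.0)
print(theta)  # -> [0. 0.]
```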
“…These methods take advantage of the computational aspects of feed-forward neural networks by recycling computations during the forward and backward passes through the network. Furthermore, computational methods for natural gradients generated by the Wasserstein metric in the ρ-space are either variational [15,7,19,16,21,41] or rely on the entropic regularization of the Wasserstein distance [35]. The variational techniques are based on the proximal approximation (1.2), whereas entropic-regularization techniques utilize computationally efficient Sinkhorn divergences [35].…”
(mentioning; confidence: 99%)
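The proximal approximation labeled (1.2) in the citing paper is not reproduced on this page. Assuming it is the standard Wasserstein proximal (JKO-type) step that such variational methods discretize, it would read

$$\rho^{k+1} \;=\; \operatorname*{arg\,min}_{\rho} \; F(\rho) \;+\; \frac{1}{2\tau}\, W_2^2\!\left(\rho, \rho^k\right),$$

where F is the energy whose Wasserstein gradient flow is being approximated, τ > 0 is the step size, and W₂ is the 2-Wasserstein distance. Entropic-regularization approaches replace W₂² by a Sinkhorn divergence, which is cheaper to evaluate at the cost of a regularization bias.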