2021
DOI: 10.1098/rspa.2021.0162
Parameterized neural ordinary differential equations: applications to computational physics problems

Abstract: This work proposes an extension of neural ordinary differential equations (NODEs) by introducing an additional set of ODE input parameters to NODEs. This extension allows NODEs to learn multiple dynamics specified by the input parameter instances. Our extension is inspired by the concept of parameterized ODEs, which are widely investigated in computational science and engineering contexts, where characteristics of the governing equations vary over the input parameters. We apply the proposed parameterized NODEs…
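The abstract's core idea — one learned vector field representing a family of dynamics indexed by ODE input parameters — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the one-hidden-layer network, its random weights, the dimensions, and the fixed-step RK4 solver are all assumptions made for the example; in practice the weights would be trained and an adaptive solver used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a one-hidden-layer vector field f_theta.
# The key idea: the ODE input parameter mu is concatenated with the
# state z, so a single learned f_theta can represent multiple dynamics.
STATE_DIM, PARAM_DIM, HIDDEN = 2, 1, 16
W1 = rng.normal(scale=0.5, size=(HIDDEN, STATE_DIM + PARAM_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.5, size=(STATE_DIM, HIDDEN))

def vector_field(z, mu):
    """f_theta(z; mu): MLP applied to the state augmented with mu."""
    h = np.tanh(W1 @ np.concatenate([z, mu]) + b1)
    return W2 @ h

def integrate(z0, mu, t0=0.0, t1=1.0, steps=100):
    """Fixed-step RK4 solve of dz/dt = f_theta(z; mu) from t0 to t1."""
    z = np.asarray(z0, dtype=float)
    dt = (t1 - t0) / steps
    for _ in range(steps):
        k1 = vector_field(z, mu)
        k2 = vector_field(z + 0.5 * dt * k1, mu)
        k3 = vector_field(z + 0.5 * dt * k2, mu)
        k4 = vector_field(z + dt * k3, mu)
        z = z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

# Different parameter instances yield different trajectories
# from the same initial condition.
z_a = integrate([1.0, 0.0], mu=np.array([0.1]))
z_b = integrate([1.0, 0.0], mu=np.array([2.0]))
print(z_a, z_b)
```

Because mu enters the network as an extra input rather than as a separate model, the same weights are shared across all parameter instances, which is what lets a parameterized NODE interpolate to unseen parameter values.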

Cited by 28 publications (14 citation statements); references 53 publications.
“…It is a particularly promising approach for learning latent dynamics of dynamical systems. NODE naturally fits well as a latent-dynamics model in reduced-order modeling of physical processes because it learns the latent dynamics in the form of ODEs [40]. Furthermore, NODE is flexible in learning from irregularly sampled time-series data [40–43].…”
Section: ODE Parameter Estimation Approaches
confidence: 99%
“…Neural ODEs offer a promising approach for hybrid modeling and system identification [12–15]. Furthermore, the neural network's architecture can be optimized to better represent experimental data.…”
Section: Neural Ordinary Differential Equations
confidence: 99%
“…How do you model the operator Ψ in Eq (7)? Many neural dynamical models for complex networks (e.g., [8]) follow a common encoder-decoder architecture used in graph neural networks [2] and also in general neural ODEs [15]. In NDCN [8] (see Fig 1(a)), the authors defined an encoder f_E : R → R^d and a decoder f_D : R^d → R, where d ∈ Z+ is the embedding dimension, so that each initial value x_i(0) is encoded jointly as d state variables in the vector h_i(0) = f_E(x_i(0)).…”
Section: Encoder-Decoder Based Neural Dynamical Models
confidence: 99%
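The encoder-decoder pattern described in this statement — lift each scalar node state into a d-dimensional latent vector, evolve the latents, then project back — can be sketched as below. This is an illustrative assumption-laden sketch, not the NDCN code: the random encoder/decoder weights, the embedding dimension, and the simple linear Euler step standing in for the learned operator Ψ are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4  # embedding dimension d (an illustrative choice)

# Hypothetical encoder/decoder weights; in NDCN these would be learned.
We = rng.normal(size=(D, 1))
be = np.zeros(D)
Wd = rng.normal(size=(1, D))

def f_E(x_scalar):
    """Encoder f_E : R -> R^d, lifting a scalar state to d latent variables."""
    return np.tanh(We @ np.array([x_scalar]) + be)

def f_D(h):
    """Decoder f_D : R^d -> R, projecting a latent state back to a scalar."""
    return float(Wd @ h)

# Each node i's initial value x_i(0) is encoded as h_i(0) = f_E(x_i(0)).
x0 = [0.3, -1.2, 0.8]                    # three nodes' initial states
H0 = np.stack([f_E(xi) for xi in x0])    # shape (num_nodes, d)

# Placeholder latent dynamics: one Euler step with a linear drift A.
# The actual operator Psi would be a learned, graph-coupled vector field.
A = -0.1 * np.eye(D)
H1 = H0 + 0.05 * (H0 @ A.T)

x1 = [f_D(h) for h in H1]                # decode back to per-node scalars
print(x1)
```

The design point the passage makes is that the dynamics are learned in the d-dimensional latent space rather than on the raw scalar states, which gives the model more capacity than an ODE defined directly on each x_i.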