2020
DOI: 10.1007/s40687-020-00217-4
Modeling of missing dynamical systems: deriving parametric models using a nonparametric framework

Cited by 19 publications (27 citation statements). References 42 publications.
“…In view of statistical learning of the high-dimensional nonlinear flow map in (2.6), each linear-in-parameter reduced model provides an optimal approximation to the flow map in the function space spanned by the proposed terms. A possible future direction is to select adaptive-to-data hypothesis spaces in a nonparametric fashion [23] and analyze the distance between the flow map and the hypothesis space spanned by these proposed terms [45,46].…”
Section: Model Selection (mentioning; confidence: 99%)
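To make the quoted idea concrete, here is a minimal sketch of a linear-in-parameter reduced model fitted to one-step flow-map data by least squares. The scalar toy system, the dictionary of proposed terms, and all variable names are assumptions of this illustration, not taken from the cited papers.

```python
import numpy as np

# Hypothetical example: approximate the one-step flow map x_{k+1} = F(x_k)
# of a scalar system by a model that is linear in its parameters,
# x_{k+1} ~ sum_j c_j * phi_j(x_k), with a user-proposed dictionary {phi_j}.

rng = np.random.default_rng(0)

def true_flow(x, dt=0.01):
    # assumed ground-truth map (cubic drift), used only to generate data
    return x + dt * (x - x**3)

# generate a noisy trajectory to serve as training data
x = np.empty(2000)
x[0] = 0.5
for k in range(1, x.size):
    x[k] = true_flow(x[k - 1]) + 1e-4 * rng.standard_normal()

# proposed dictionary of terms (an assumption of this sketch)
def dictionary(z):
    return np.column_stack([np.ones_like(z), z, z**2, z**3])

Phi = dictionary(x[:-1])   # dictionary evaluated along the trajectory
y = x[1:]                  # one-step-ahead targets

# least-squares fit: the best approximation of the flow map
# within the function space spanned by the proposed terms
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimated coefficients:", coeffs)
```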
“…These developments demand a systematic understanding of model reduction from the perspectives of dynamical systems (see, e.g. [7,19,20]), numerical approximation [21,22], and statistical learning [23].…”
Section: Introduction (mentioning; confidence: 99%)
“…Constructing RKHS via orthogonal polynomials. Inspired by Mercer's theorem [48] and the reproducing kernel weighted Hilbert spaces used in [5,22,23,58], we consider the orthonormal polynomials with respect to L^2(R^d, W) to construct our kernel and RKHS. Here,…”
Section: Proof (to begin with, by Lemma 2.3, K̃ defines a kernel and K̃(·, …) (mentioning; confidence: 99%)
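As one concrete way to realize the quoted construction, the sketch below assembles a truncated Mercer-type kernel K(x, y) = Σ_k λ_k p_k(x) p_k(y) from probabilists' Hermite polynomials, which are orthogonal with respect to a Gaussian weight. The Gaussian weight, the geometric eigenvalue decay, and the truncation level are assumptions of this example, not the specific kernel used in the cited work.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermite_orthonormal(x, k):
    # He_k(x) / sqrt(k!) is orthonormal with respect to the standard
    # Gaussian density, i.e. in a weighted L^2 space on the real line
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return hermeval(x, coef) / math.sqrt(math.factorial(k))

def mercer_kernel(x, y, eigvals):
    # truncated Mercer-type sum K(x, y) = sum_k lambda_k p_k(x) p_k(y);
    # the eigenvalue sequence passed in is an assumption of this sketch
    return sum(
        lam * hermite_orthonormal(x, k) * hermite_orthonormal(y, k)
        for k, lam in enumerate(eigvals)
    )

eigvals = 0.5 ** np.arange(8)   # assumed spectrum, decaying geometrically
x = np.linspace(-2.0, 2.0, 5)
K = mercer_kernel(x[:, None], x[None, :], eigvals)
print(np.round(K, 3))           # symmetric positive semi-definite Gram matrix
```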
“…Indeed, for data that lie on a smooth d-dimensional manifold embedded in R^n, one can construct a Mercer-type kernel based on the orthogonal basis functions obtained via the diffusion maps algorithm [9] and achieve the optimal rate as reported in [49]. While this approach has been explored in [23], the errors in the estimation of these eigenbases…”
(mentioning; confidence: 99%)
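In the same spirit, a data-driven analogue can be sketched by substituting diffusion-maps eigenvectors for an analytic basis: build a Gaussian affinity matrix on samples from a manifold, take its leading eigenpairs, and form a truncated Mercer-type sum. The circle data set, the bandwidth, and the truncation level below are assumptions made for illustration, not the construction analyzed in the cited references.

```python
import numpy as np

rng = np.random.default_rng(1)

# sample points on a circle (a smooth 1-d manifold) embedded in R^2
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Gaussian affinity matrix with an assumed bandwidth epsilon
eps = 0.1
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq_dists / eps)

# symmetric normalization S = D^{-1/2} W D^{-1/2}, the symmetric
# conjugate of the diffusion-maps Markov matrix
d = W.sum(axis=1)
S = W / np.sqrt(d[:, None] * d[None, :])
eigvals, eigvecs = np.linalg.eigh(S)
idx = np.argsort(eigvals)[::-1][:10]      # keep the 10 leading modes
lam, phi = eigvals[idx], eigvecs[:, idx]

# data-driven Mercer-type kernel on the sample points:
# K = sum_k lambda_k phi_k phi_k^T  (truncation level is an assumption)
K = (phi * lam) @ phi.T
print("kernel matrix shape:", K.shape)
```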
“…The past decades have witnessed revolutionary developments of data-driven strategies, ranging from parametric models (see, e.g., [8,9,10,11,12,13,14] and the references therein) to nonparametric and machine learning methods (see, e.g., [15,16,17,18]). These developments demand a systematic understanding of model reduction from the perspectives of dynamical systems (see, e.g., [7,19,20]), numerical approximation [21,22], and statistical learning [17,23].…”
Section: Introduction (mentioning; confidence: 99%)