2021
DOI: 10.1017/jfm.2021.271
Sparsity-promoting algorithms for the discovery of informative Koopman-invariant subspaces

Cited by 37 publications (10 citation statements) · References 77 publications
“…4, allowing the user to select which among the models best matches the desired time-series behavior. Pan et al. [60] employ a similar manual examination of a figure of merit (FoM) followed by refitting to downselect from a library of eigenvectors in the context of Koopman operators. This method (performing one complete STLSQ run and recording the model at each step) resembles how a Lasso regularization path is computed, and is an alternative to sweeping the sparsity parameter λ across multiple complete SINDy runs.…”
Section: Figure-of-Merit Histories
Mentioning confidence: 99%
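The single-run-with-history strategy described in the excerpt above can be sketched as follows. This is a minimal illustration, not code from the cited works; `stlsq_with_history`, the fixed threshold, and the iteration count are my own illustrative choices:

```python
import numpy as np

def stlsq_with_history(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares, recording the model at each step.

    A single run yields a sequence of increasingly sparse candidate models,
    analogous to a Lasso regularization path, instead of sweeping the
    sparsity parameter with multiple complete runs.
    """
    # Initial dense least-squares fit over the full library Theta.
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    history = [Xi.copy()]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold      # coefficients to prune
        Xi[small] = 0.0
        # Refit each state's equation on the surviving library terms only.
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(
                    Theta[:, big], dXdt[:, k], rcond=None)[0]
        history.append(Xi.copy())           # record the model at this step
    return history
```

The user can then inspect the recorded models against a figure of merit and pick the sparsity level that best matches the desired time-series behavior.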
“…Hence, computing the spectrum of the matrix K by direct SVD is computationally intensive, even though most of the eigenvalues will be zero. The method of snapshots, a parallel version of SVD, or randomized SVD can be used to address this difficulty [10,11]. In this project, we use a simpler randomized method by generalizing the DMD algorithm.…”
Section: Dynamic Mode Decomposition
Mentioning confidence: 99%
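As a rough illustration of the randomized-SVD alternative mentioned in the excerpt above, the following sketch uses a Gaussian range finder in the style of Halko, Martinsson and Tropp. The function name and the oversampling parameter are my own illustrative choices, not the cited project's method:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, seed=0):
    """Approximate truncated SVD via a Gaussian range finder.

    Projects A onto a low-dimensional random range before a small dense SVD,
    avoiding a full decomposition when most singular values are negligible.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Random test matrix; a few extra columns improve accuracy.
    Omega = rng.standard_normal((n, rank + n_oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for range(A)
    B = Q.T @ A                             # small (rank + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                              # lift back to the original space
    return U[:, :rank], s[:rank], Vt[:rank]
```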
“…A commonly used projection matrix is based on the SVD of the input matrix, X = UΣV*, with the projection matrix chosen as P = U*, where * denotes the conjugate transpose. Using Equation (11) and the SVD of X, the operator on the projected space can be written as K = U*YVΣ⁻¹.…”
Section: Remark 2 (Projection by SVD)
Mentioning confidence: 99%
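The projection formula quoted above, K = U*YVΣ⁻¹, translates directly into NumPy. This is a minimal sketch; `projected_koopman` is an illustrative name, and the optional rank truncation is my addition, not part of the quoted remark:

```python
import numpy as np

def projected_koopman(X, Y, rank=None):
    """DMD-style projected approximation of the Koopman operator.

    Given snapshot matrices X and Y with Y ≈ K X, the SVD X = U Σ V* yields
    the projection P = U* and the reduced operator K̃ = U* Y V Σ⁻¹.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if rank is not None:                    # optional truncation of the basis
        U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    K_tilde = U.conj().T @ Y @ Vt.conj().T @ np.diag(1.0 / s)
    return K_tilde, U
```

When X has full row rank, K̃ = U* A U is similar to the true linear operator A, so its eigenvalues reproduce the DMD spectrum.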
“…Linear subspaces, however, are highly restrictive and ill-suited to handle parametric dependencies. Attempts to circumvent these shortcomings include using multiple linear subspaces covering different temporal or spatial domains [9,10,11,12], multi-resolution DMD [13], diffusion map embeddings [14,15,16,17], or more recently, using deep learning to compute underlying nonlinear subspaces which are advantageous for dynamics, both linear and nonlinear [18,19,20,21,22]. These techniques represent data-driven architectures for extracting order-parameter descriptions of the underlying spatio-temporal dynamics observed [23].…”
Section: Introduction
Mentioning confidence: 99%