2020
DOI: 10.1137/19m1277059

On the Koopman Operator of Algorithms

Abstract: A systematic mathematical framework for the study of numerical algorithms would allow comparisons, facilitate conjugacy arguments, as well as enable the discovery of improved, accelerated, data-driven algorithms. Over the course of the last century, the Koopman operator has provided a mathematical framework for the study of dynamical systems, which facilitates conjugacy arguments and can provide efficient reduced descriptions. More recently, numerical approximations of the operator have made it possible to ana…
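To make the operator viewpoint of the abstract concrete, the following minimal Python sketch (an illustration, not the paper's own construction) treats one iteration of gradient descent on a quadratic as a dynamical system and fits a finite-dimensional Koopman approximation by a DMD-style least-squares regression on iterate snapshots; the test matrix, step size, and dictionary of linear observables are assumptions made only for this example.

```python
import numpy as np

# Minimal sketch: view one step of an algorithm x_{k+1} = F(x_k) as a
# dynamical system and approximate the Koopman operator K, which acts on
# observables g via (K g)(x) = g(F(x)). For gradient descent on a quadratic
# the iteration map is linear, so linear observables suffice and a DMD-style
# least-squares fit recovers the (finite-dimensional) operator exactly.

rng = np.random.default_rng(0)

A = np.array([[3.0, 0.5],          # assumed SPD Hessian of f(x) = 0.5 x^T A x
              [0.5, 1.0]])
eta = 0.1                          # assumed step size
F = lambda x: x - eta * (A @ x)    # one iteration of gradient descent

# Snapshot pairs (x_k, F(x_k)) collected from a few random initializations
X, Y = [], []
for _ in range(20):
    x = rng.standard_normal(2)
    for _ in range(10):
        y = F(x)
        X.append(x)
        Y.append(y)
        x = y
X, Y = np.array(X).T, np.array(Y).T          # columns are snapshots

# DMD regression: Y ≈ K X on the dictionary of coordinate observables
K = Y @ np.linalg.pinv(X)

# The Koopman (DMD) eigenvalues of the algorithm equal those of I - eta*A
# and govern its convergence rate.
print("Koopman (DMD) eigenvalues :", np.linalg.eigvals(K))
print("Iteration-map eigenvalues :", np.linalg.eigvals(np.eye(2) - eta * A))
```

The same regression applies unchanged when the iteration map is nonlinear, except that a richer dictionary of observables is then needed for the finite-dimensional approximation to be useful.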

Cited by 27 publications (21 citation statements)
References 38 publications
“…The main idea is that the iterates of the algorithm, in the high-dimensional ambient space naturally "concentrate" (due to scale separation of time scales, eigenvalues, sensitivities) on lower-dimensional manifolds (e.g. see [47]). The common language of conditional KL expansions can then seamlessly (a) deduce the parametrization of a useful reduced "latent space"; (b) provide a local surrogate (what we would like to call a "targeted surrogate" in the sense that it is not global, but rather "just enough" for the next algorithm iteration) in this latent space that (c) can be used to design the next algorithm iteration in this targeted latent space; and then also (d) translate the results to the full space ("lifting"), where the full model will be briefly used (to briefly simulate, or to evaluate the expensive objective function).…”
Section: Discussion and Outlook (mentioning)
confidence: 99%
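As a hedged illustration of the workflow quoted above (the conditional KL expansion is stood in for by an uncentered PCA/POD of the iterate history, and all dimensions, step sizes, and models below are assumptions for this sketch, not the cited paper's method):

```python
import numpy as np

# Hedged sketch of the quoted workflow:
# (a) parametrize a reduced latent space from recent iterates,
# (b) fit a local ("targeted") linear surrogate in that latent space,
# (c) take the next algorithm step in the latent space, and
# (d) lift the result back to the full ambient space.

rng = np.random.default_rng(1)
d = 50                                        # assumed ambient dimension
# Strong scale separation: many quickly contracted directions, two slow ones,
# so the iterates concentrate near a 2-dimensional slow subspace.
evals = np.concatenate([[0.5, 0.2], np.full(d - 2, 10.0)])
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(evals) @ Q.T
eta = 0.08
step = lambda x: x - eta * (A @ x)            # full-space algorithm iteration

x = rng.standard_normal(d)
history = []
for _ in range(30):
    x = step(x)
    history.append(x)
H = np.array(history)                         # (n_iters, d)

# (a) latent-space parametrization via an (uncentered) POD/PCA of the history
U, S, Vt = np.linalg.svd(H, full_matrices=False)
r = 2                                         # assumed latent dimension
basis = Vt[:r]                                # (r, d)
encode = lambda z: basis @ z
lift = lambda c: basis.T @ c                  # (d) lifting back to the full space

# (b) local linear surrogate c_{k+1} ≈ M c_k fitted on the latent history
C = H @ basis.T                               # latent coordinates, (n_iters, r)
M = np.linalg.lstsq(C[:-1], C[1:], rcond=None)[0].T

# (c) next iteration taken in the latent space, then (d) lifted
x_pred = lift(M @ encode(H[-1]))
x_true = step(H[-1])
print("relative lift-back error of one latent surrogate step:",
      np.linalg.norm(x_pred - x_true) / np.linalg.norm(x_true))
```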
“…Finally, we see this work as being part of a growing body of literature that is bringing attention to the fact that dynamical systems theory, and in particular KOT, can be used for problems that have historically relied on optimization theory [12][13][14][64]. These papers have highlighted the fact that, while optimization theory has its advantages, its de-emphasis on the past history of the system for computing the future state (e.g.…”
Section: Discussion (mentioning)
confidence: 99%
“…Another author took a similar perspective to identify KOs of interest for NN training [10]. Unbeknownst (and in parallel) to us, general work connecting KOT to algorithms (including GD) was very recently explored and offered as a way in which NN training could be sped up [15]. However, while [15] focused on solving numerical problems by constructing the KO associated with GD, we provide a full-fledged study of NN training that uses GD.…”
Section: Related Work (mentioning)
confidence: 99%
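Purely as a hedged sketch of the idea the statement attributes to [15] (constructing a Koopman-operator model of gradient descent and using it to speed things up), and not a reproduction of either paper's method: fit an affine DMD-with-bias model from a short burst of GD iterates and jump to its fixed point, which for a quadratic objective is the minimizer that plain GD only approaches slowly. All problem sizes, conditioning, and step sizes below are assumptions.

```python
import numpy as np

# Hedged sketch: accelerate gradient descent by fitting a Koopman-style
# affine model x_{k+1} ≈ K x_k + c from a short burst of iterates and then
# jumping to the model's fixed point x* = (I - K)^{-1} c. For the quadratic
# f(x) = 0.5 x^T A x - b^T x this fixed point is the minimizer A^{-1} b.

rng = np.random.default_rng(2)
d = 20
A = np.diag(np.linspace(0.1, 5.0, d))       # assumed ill-conditioned Hessian
b = rng.standard_normal(d)
x_star = np.linalg.solve(A, b)              # true minimizer, used only to report errors
eta = 0.18                                  # assumed (stable) step size
gd_step = lambda x: x - eta * (A @ x - b)

# Short burst of ordinary gradient descent
x = np.zeros(d)
snaps = [x]
for _ in range(2 * d):                      # enough snapshots to identify K and c
    x = gd_step(x)
    snaps.append(x)
S = np.array(snaps)

# DMD-with-bias regression: x_{k+1} ≈ [K  c] [x_k; 1]
X_aug = np.hstack([S[:-1], np.ones((len(S) - 1, 1))])
W = np.linalg.lstsq(X_aug, S[1:], rcond=None)[0].T      # shape (d, d + 1)
K, c = W[:, :d], W[:, d]

# Koopman-based acceleration: one linear solve replaces many slow GD steps
x_jump = np.linalg.solve(np.eye(d) - K, c)

print("error after", 2 * d, "plain GD steps   :", np.linalg.norm(S[-1] - x_star))
print("error after Koopman fixed-point jump:", np.linalg.norm(x_jump - x_star))
```

In this linear setting the fitted model is exact, so the jump lands on the minimizer; for neural-network training the analogous construction can only be expected to give a local extrapolation rather than an exact solution.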