2018 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas.2018.8351751

Data assimilation approach to analysing systems of ordinary differential equations

Abstract: The problem of parameter fitting for nonlinear oscillator models to noisy time series is addressed using a combination of Ensemble Kalman Filter and optimisation techniques. Encouraging preliminary results for acceptable sampling rates and noise levels are presented. Application to the understanding and control of tokamak nuclear reactor operation is discussed.
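The abstract's combination of an Ensemble Kalman Filter with optimisation for parameter fitting can be illustrated by a minimal sketch of joint state and parameter estimation. Everything below is an illustrative assumption, not the paper's actual setup: the Van der Pol oscillator stands in for "nonlinear oscillator model", and the step size, noise levels, ensemble size, and variable names are all invented for the example.

```python
import numpy as np

# Hedged sketch: estimate an oscillator parameter from noisy observations
# by augmenting the state with the parameter and running a perturbed-
# observation Ensemble Kalman Filter (EnKF). Model and settings are
# illustrative assumptions, not the paper's configuration.

rng = np.random.default_rng(0)
dt, mu_true = 0.01, 1.5

def f(state, mu):
    """Van der Pol vector field: x' = v, v' = mu*(1 - x^2)*v - x."""
    x, v = state
    return np.array([v, mu * (1.0 - x**2) * v - x])

def step(state, mu):
    """One forward-Euler step (kept simple for the sketch)."""
    return state + dt * f(state, mu)

# Synthetic noisy observations of the position x only.
truth = np.array([2.0, 0.0])
obs = []
for _ in range(2000):
    truth = step(truth, mu_true)
    obs.append(truth[0] + 0.05 * rng.standard_normal())

# Augmented ensemble: each row is [x, v, mu].
N = 50
ens = np.column_stack([
    2.0 + 0.1 * rng.standard_normal(N),
    0.1 * rng.standard_normal(N),
    1.0 + 0.5 * rng.standard_normal(N),   # prior guess for mu
])
R = 0.05**2  # observation-noise variance

for y in obs:
    # Forecast: propagate each member with its own parameter value.
    for i in range(N):
        ens[i, :2] = step(ens[i, :2], ens[i, 2])
    # Analysis: Kalman update from ensemble covariances (x is observed).
    A = ens - ens.mean(axis=0)
    Pxy = A.T @ A[:, 0] / (N - 1)             # cov([x, v, mu], x)
    Pyy = A[:, 0] @ A[:, 0] / (N - 1) + R     # var(x) + noise
    K = Pxy / Pyy                             # Kalman gain (3,)
    perturbed = y + np.sqrt(R) * rng.standard_normal(N)
    ens += np.outer(perturbed - ens[:, 0], K)

mu_est = ens[:, 2].mean()  # posterior mean of the fitted parameter
```

The parameter rides along in the augmented state and is corrected through its sampled correlation with the observed coordinate; the optimisation step mentioned in the abstract could then refine `mu_est` further, e.g. as a starting point for a least-squares fit.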

Cited by 4 publications (4 citation statements) · References 8 publications
“…Implementation A Python implementation of DFBGN is available on Github. Notation We use ‖·‖ to refer to the Euclidean norm of vectors and the operator 2-norm of matrices, and B(x, Δ) for x ∈ R^n and Δ > 0 to denote the closed ball {y ∈ R^n : ‖y − x‖ ≤ Δ}.…”
Section: Structure of Paper
confidence: 99%
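The closed-ball notation in the snippet above is easy to check numerically; the function below is a minimal illustration (the name is an invented convenience, not from the cited work):

```python
import numpy as np

def in_closed_ball(y, x, delta):
    """True iff y lies in B(x, delta) = {y in R^n : ||y - x|| <= delta}."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    return bool(np.linalg.norm(y - x) <= delta)
```

Because the ball is closed, boundary points belong to it: `in_closed_ball([1.0, 0.0], [0.0, 0.0], 1.0)` is `True`.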
“…Existing model-based DFO techniques are primarily designed for small- to medium-scale problems, as the linear algebra cost of each iteration (largely due to the cost of constructing interpolation models) means that their runtime increases rapidly for large problems. There are several settings where scalable DFO algorithms may be useful, such as data assimilation [9,3], machine learning [67,35], generating adversarial examples for deep neural networks [2,68], image analysis [30], and as a possible proxy for global optimization methods [19].…”
Section: Introduction
confidence: 99%
“…We then describe a practical implementation of RSDFO-GN, which we call DFBGN (Derivative-Free Block Gauss-Newton). Compared to existing methods, DFBGN reduces the linear algebra cost of model construction and the initial objective evaluation cost by allowing fewer interpolation points at every iteration. In order for DFBGN to have both scalability and a similar evaluation efficiency to existing methods (i.e.…”
Section: Contributions
confidence: 99%
“…Existing model-based DFO techniques are primarily designed for small- to medium-scale problems, as the linear algebra cost of each iteration (largely due to the cost of constructing interpolation models) means that their runtime increases rapidly for large problems. There are several settings where scalable DFO algorithms may be useful, such as data assimilation [3,10], machine learning [39,71], generating adversarial examples for deep neural networks [2,75], image analysis [34], and as a possible proxy for global optimization methods [21].…”
Section: Introduction
confidence: 99%