2020
DOI: 10.48550/arxiv.2010.09694
Preprint

DAN -- An optimal Data Assimilation framework based on machine learning Recurrent Networks

Abstract: Data assimilation algorithms aim at forecasting the state of a dynamical system by combining a mathematical representation of the system with noisy observations thereof. We propose a fully data-driven deep learning architecture generalizing recurrent Elman networks and data assimilation algorithms which provably reaches the same prediction goals as the latter. On numerical experiments based on the well-known Lorenz system and when suitably trained using snapshots of the system trajectory (i.e. batches of state…
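The abstract describes DAN as a generalization of recurrent Elman networks. For orientation, a minimal Elman-style recurrent cell is sketched below; the dimensions, names, and readout are illustrative assumptions, not the authors' architecture.

```python
# A minimal Elman-style recurrent cell (illustrative sketch; the DAN
# architecture in the paper generalizes this, it is not reproduced here).
import torch

class ElmanCell(torch.nn.Module):
    def __init__(self, obs_dim: int, mem_dim: int):
        super().__init__()
        self.in2mem = torch.nn.Linear(obs_dim, mem_dim)   # W_ih
        self.mem2mem = torch.nn.Linear(mem_dim, mem_dim)  # W_hh
        self.mem2out = torch.nn.Linear(mem_dim, obs_dim)  # decode memory

    def forward(self, y, h):
        # h_t = tanh(W_ih y_t + W_hh h_{t-1}); forecast decoded from memory
        h = torch.tanh(self.in2mem(y) + self.mem2mem(h))
        return self.mem2out(h), h

cell = ElmanCell(obs_dim=3, mem_dim=32)
h = torch.zeros(1, 32)                    # internal memory
forecast, h = cell(torch.randn(1, 3), h)  # fold in one noisy observation
```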

Cited by 2 publications (3 citation statements). References 17 publications.
“…From a deep learning point of view, both operators naturally relate to RNNs. As such, one may explore state-of-the-art RNNs such as LSTM and the Gated Recurrent Unit (GRU) [31], as in [199], [79].…”
Section: Error Specification in DA: Traditional and ML Methods
confidence: 99%
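Concretely, a gated recurrent cell can stand in for the learned operators in such a scheme. The sketch below is a hypothetical setup (dimensions, readout, and data are placeholders), showing a GRU cell folding observations into an internal memory from which a state forecast is decoded.

```python
# Hypothetical sketch: a GRU cell as a learned analysis/propagation pair.
import torch

obs_dim, mem_dim = 3, 32
cell = torch.nn.GRUCell(obs_dim, mem_dim)    # fold an observation into memory
readout = torch.nn.Linear(mem_dim, obs_dim)  # decode memory into a forecast

h = torch.zeros(1, mem_dim)                  # internal memory (hidden state)
for y in torch.randn(10, 1, obs_dim):        # stream of noisy observations
    h = cell(y, h)                           # analysis-like update
    forecast = readout(h)                    # propagation-like prediction
```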
“…One may also explore physically-constrained parameterisations, typically neural Ordinary Differential Equation (ODE)/PDE schemes for the forecasting operator if the underlying physics is known [200], [201], [194], and/or an explicit Kalman recursion rule under an additional linear-Gaussian hypothesis for the posterior and the observation operator [202], [79]. Regarding the learning step, these approaches may adapt classic stochastic optimisation algorithms [31] with randomised re-initialisation steps of the internal states of the recurrent blocks [199]. 4DVar-DA-inspired neural schemes: from a neural perspective, variational DA combines a variational cost (see Section II-B) and a gradient-based optimizer using an adjoint method [48]. Assuming that both the observation and dynamical operators are implemented as neural operators, the automatic differentiation embedded in deep learning schemes makes it convenient to apply gradient descent with respect to the state sequence and/or model parameters, with no need to explicitly code the adjoint operators.…”
Section: Error Specification in DA: Traditional and ML Methods
confidence: 99%
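The last point lends itself to a compact illustration: with neural stand-ins for the dynamical and observation operators, automatic differentiation supplies the adjoint for free. The sketch below is a hypothetical minimal 4DVar-style loop; the operators, dimensions, and identity error covariances (B = R = I) are all assumptions made for illustration.

```python
# Minimal 4DVar-style assimilation via automatic differentiation
# (hypothetical operators; not code from the cited papers).
import torch

torch.manual_seed(0)
n = 3  # state dimension

dynamics = torch.nn.Sequential(               # neural stand-in for the model M
    torch.nn.Linear(n, 16), torch.nn.Tanh(), torch.nn.Linear(16, n))
observe = torch.nn.Linear(n, n)               # neural stand-in for H

x_b = torch.zeros(n)                          # background (prior) state
y_obs = [torch.randn(n) for _ in range(5)]    # synthetic observation window

x0 = x_b.clone().requires_grad_(True)         # control variable: initial state
opt = torch.optim.Adam([x0], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    cost = 0.5 * torch.sum((x0 - x_b) ** 2)   # background term (B = I)
    x = x0
    for y in y_obs:                           # integrate, accumulate misfits
        x = dynamics(x)
        cost = cost + 0.5 * torch.sum((observe(x) - y) ** 2)  # R = I
    cost.backward()                           # adjoint comes from autodiff
    opt.step()

print("analysed initial state:", x0.detach())
```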
“…Recently, a rich literature has emerged to bridge data assimilation and deep learning (Abdalla et al., 2021; Barthelemy et al., 2021; Bocquet et al., 2020; Boudier et al., 2020; Nonnenmacher & Greenberg, 2021). It provides new minimization schemes as well as new means to explore data assimilation problems when the observation operators and/or the dynamical priors are not explicitly known.…”
Section: Data Assimilation for Sea Surface Dynamics
confidence: 99%