2022
DOI: 10.5194/hess-26-5085-2022

Improving hydrologic models for predictions and process understanding using neural ODEs

Abstract: Deep learning methods have frequently outperformed conceptual hydrologic models in rainfall-runoff modelling. Attempts of investigating such deep learning models internally are being made, but the traceability of model states and processes and their interrelations to model input and output is not yet fully understood. Direct interpretability of mechanistic processes has always been considered an asset of conceptual models that helps to gain system understanding aside of predictability. We introduce h…
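The core idea named in the title and abstract is a hybrid model in which neural networks sit inside an ODE-based conceptual hydrologic model. As a rough, hypothetical illustration of that general approach (not the authors' published formulation), the sketch below replaces the discharge flux of a one-bucket water-balance ODE with a small neural network and integrates it with forward Euler; the network architecture, the ET closure, and the synthetic forcing are all assumptions.

```python
# Minimal sketch (illustrative only) of a hydrologic neural ODE:
# a single storage S with water balance  dS/dt = P(t) - ET(S) - Q(S),
# where the discharge flux Q(S) is learned by a small neural network
# and the ODE is integrated with a forward Euler step.
import torch
import torch.nn as nn

class NeuralBucket(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        # Learned discharge flux Q(S); Softplus keeps the flux non-negative.
        self.q_net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, s0, precip, pet, dt=1.0):
        """Integrate the bucket ODE over the forcing series (forward Euler)."""
        s = s0
        discharge = []
        for p, e in zip(precip, pet):
            q = self.q_net(s.view(1, 1)).view(())   # learned runoff flux
            et = e * torch.tanh(s / 100.0)           # crude storage-limited ET (assumed form)
            s = s + dt * (p - et - q)                # water-balance update
            s = torch.clamp(s, min=0.0)              # storage cannot go negative
            discharge.append(q)
        return torch.stack(discharge), s

# Hypothetical usage with synthetic forcing; a real setup would calibrate
# against observed streamflow, e.g. with an MSE- or NSE-based loss.
model = NeuralBucket()
precip = torch.rand(365) * 10.0   # mm/day, synthetic
pet = torch.rand(365) * 5.0       # mm/day, synthetic
q_sim, s_end = model(torch.tensor(50.0), precip, pet)
loss = ((q_sim - torch.rand(365)) ** 2).mean()   # placeholder target
loss.backward()                                   # gradients flow through the ODE steps
```

Because every update of the storage is differentiable, the embedded network can be calibrated end to end by gradient descent while the state S retains its physical meaning, which is the interpretability argument the abstract points to.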

Cited by 45 publications (26 citation statements)
References: 65 publications
“…BESA is also, by nature, interpretable in how it forecasts scPDSI. LSTM or DL models can also be improved in terms of forecast accuracy and interpretability when they are internally or externally coupled with physics or prior knowledge within the framework of TGDS (Höge et al., 2022; Jiang et al., 2020; Karpatne et al., 2017; Rackauckas et al., 2020). On the other hand, DL can extract data features that the conventional hydrologic approach cannot use.…”
Section: Results (mentioning; confidence: 99%)
“…However, without predefined structures governing a problem, ML models require extensive homogeneous training data and might lose their reliability under changing conditions (e.g., climate change), especially when “big data which can address dynamics” is not always available (Nearing et al., 2021; Rackauckas et al., 2020). Furthermore, although the interpretability of deep learning (DL) has improved with sensitivity analysis and integrated gradients and expected gradients methods that can facilitate the understanding of feature importance in a learning process (Jiang et al., 2020; Kratzert et al., 2019; Nearing et al., 2021; Samek et al., 2019; Sundararajan et al., 2017), those methods for interpretability have their own assumptions and/or limitations (Höge et al., 2022; Sundararajan et al., 2017). Therefore, “theory‐guided data science (TGDS)” and generalizable models with scientific consistency and interpretability are needed (Karpatne et al., 2017; Nearing et al., 2021).…”
Section: Introduction (mentioning; confidence: 99%)
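The preceding statement names integrated gradients (Sundararajan et al., 2017) among the attribution methods used to probe feature importance in DL models. Below is a minimal, self-contained sketch of that technique; the toy model and input features are hypothetical stand-ins, not taken from any of the cited studies.

```python
# Minimal sketch of integrated gradients: attribute a model prediction to its
# input features by averaging gradients along a straight path from a baseline
# x' to the input x, then scaling by (x - x').
import torch
import torch.nn as nn

def integrated_gradients(model, x, baseline=None, steps=50):
    """Riemann approximation of IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂F(x' + a(x - x'))/∂x_i da."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)        # (steps, n_features) points on the path
    path.requires_grad_(True)
    outputs = model(path).sum()                      # sum decouples the path points
    grads = torch.autograd.grad(outputs, path)[0]    # ∂F/∂x at every path point
    avg_grads = grads.mean(dim=0)
    return (x - baseline) * avg_grads                # per-feature attribution

# Hypothetical usage: attribute a toy regression model's output to three
# inputs (e.g. precipitation, temperature, soil moisture).
model = nn.Sequential(nn.Linear(3, 8), nn.Tanh(), nn.Linear(8, 1))
x = torch.tensor([5.0, 12.0, 0.3])
print(integrated_gradients(model, x))
```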
“…Future work should look at the contribution made by the LISFLOOD streamflow input data to the final LSTM forecasts and whether the impact is significant enough to warrant the additional resources. Alternatively, future research could investigate incorporating machine learning techniques directly into LISFLOOD, either to speed up calculations (Höge et al., 2022; Rackauckas et al., 2020; Raissi et al., 2020) or to replicate processes that are not currently modelled in LISFLOOD within GloFAS, such as the impact of reservoir management.…”
Section: Potential Improvements (mentioning; confidence: 99%)
“…However, Frame et al. (2022) found that the addition of the mass constraint reduced the skill when predicting extreme values, suggesting further work is necessary. Alternatively, other studies have used neural networks in process-based models to solve differential equations more efficiently whilst allowing interpretability of the output (Höge et al., 2022; Rackauckas et al., 2020; Raissi et al., 2020).…”
Section: Introduction (mentioning; confidence: 99%)
“…Bhasme et al. (2022) used water storages simulated by a physics-based model as the inputs for a DL streamflow model, improving the DL model's physical consistency. Physical knowledge can also be directly encoded into a DL model (M. Chen et al., 2023; Höge et al., 2022; Kraft et al., 2022; Reichstein et al., 2019). For example, Jiang et al. (2020) wrapped the physical processes depicted by a conceptual hydrological model (i.e., EXP-HYDRO) into a recurrent neural network (RNN).…”
Section: Introduction (mentioning; confidence: 99%)
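The last statement refers to Jiang et al. (2020) wrapping the processes of a conceptual model (EXP-HYDRO) into a recurrent neural network. The sketch below illustrates only the general pattern, a recurrent cell whose hidden state is a physical storage and whose trainable parameters are conceptual-model coefficients; the equations and parameter names are simplified placeholders, not the EXP-HYDRO formulation.

```python
# Rough sketch of wrapping a conceptual water-balance update into a recurrent
# cell: the hidden state is a storage in mm, and the "weights" are conceptual
# parameters (runoff coefficient, storage capacity) learned by backpropagation.
import torch
import torch.nn as nn

class BucketRNNCell(nn.Module):
    """Recurrent cell whose hidden state is a physical storage (mm)."""
    def __init__(self):
        super().__init__()
        self.log_k = nn.Parameter(torch.tensor(-2.0))   # runoff coefficient (log space)
        self.s_max = nn.Parameter(torch.tensor(200.0))  # storage capacity (mm)

    def forward(self, s, precip, pet):
        k = torch.exp(self.log_k)
        et = pet * torch.clamp(s / self.s_max, 0.0, 1.0)   # storage-limited ET
        q = k * s                                           # linear-reservoir runoff
        s_new = torch.clamp(s + precip - et - q, min=0.0)   # water-balance update
        return q, s_new

# Hypothetical usage with synthetic daily forcing.
cell = BucketRNNCell()
s = torch.tensor(100.0)
flows = []
for p, e in zip(torch.rand(30) * 10.0, torch.rand(30) * 4.0):
    q, s = cell(s, p, e)
    flows.append(q)
streamflow = torch.stack(flows)   # differentiable w.r.t. log_k and s_max
```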