2024
DOI: 10.1029/2023wr036170

Distributed Hydrological Modeling With Physics‐Encoded Deep Learning: A General Framework and Its Application in the Amazon

Chao Wang,
Shijie Jiang,
Yi Zheng
et al.

Abstract: While deep learning (DL) models exhibit superior simulation accuracy over traditional distributed hydrological models (DHMs), their main limitations lie in opacity and the absence of underlying physical mechanisms. The pursuit of synergies between DL and DHMs is an engaging research domain, yet a definitive roadmap remains elusive. In this study, a novel framework that seamlessly integrates a process‐based hydrological model encoded as a neural network (NN), an additional NN for mapping spatially distributed a…
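The abstract describes a physics-encoded architecture in which one NN maps spatially distributed inputs to the parameters of a process-based hydrological model that is itself expressed in differentiable operations, so the whole pipeline can be trained end-to-end. The sketch below illustrates that general idea only; the toy bucket water balance, all names, and all parameter ranges are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a physics-encoded hybrid model: an NN maps static
# catchment attributes to parameters of a differentiable water-balance model,
# and gradients from a discharge loss flow back through the physics to the NN.
import torch
import torch.nn as nn


class ParameterNet(nn.Module):
    """Maps static attributes (e.g., soil, terrain) to process-model parameters."""

    def __init__(self, n_attrs: int, n_params: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_attrs, 32), nn.Tanh(), nn.Linear(32, n_params)
        )

    def forward(self, attrs: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps raw parameters in (0, 1); rescaled to physical ranges below.
        return torch.sigmoid(self.mlp(attrs))


def bucket_model(precip, pet, params, s0=50.0):
    """Toy differentiable water balance: one storage bucket per grid cell.

    params[..., 0] -> storage capacity scale, params[..., 1] -> runoff coefficient.
    Written with plain torch ops so gradients flow back to ParameterNet.
    """
    capacity = 100.0 + 400.0 * params[..., 0]   # mm (assumed range)
    k = 0.01 + 0.5 * params[..., 1]             # 1/day (assumed range)
    storage = torch.full_like(capacity, s0)
    flows = []
    for t in range(precip.shape[0]):            # loop over time steps
        et = pet[t] * torch.clamp(storage / capacity, 0.0, 1.0)
        runoff = k * storage
        storage = torch.clamp(storage + precip[t] - et - runoff, min=0.0)
        flows.append(runoff)
    return torch.stack(flows)                   # (time, n_cells)


# End-to-end training on observed discharge (synthetic data for illustration).
n_cells, n_attrs, n_steps = 10, 5, 365
attrs = torch.rand(n_cells, n_attrs)
precip = torch.rand(n_steps, n_cells) * 10.0
pet = torch.rand(n_steps, n_cells) * 5.0
q_obs = torch.rand(n_steps, n_cells) * 3.0

net = ParameterNet(n_attrs)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(100):
    opt.zero_grad()
    q_sim = bucket_model(precip, pet, net(attrs))
    loss = torch.mean((q_sim - q_obs) ** 2)     # gradients pass through the physics
    loss.backward()
    opt.step()
```

Because the process model is encoded in differentiable operations, the parameter-mapping NN is constrained by the physics during training, which is the interpretability argument the citation statement below makes for hybrid and differentiable modeling.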

Cited by 10 publications (1 citation statement)
References 123 publications (157 reference statements)
“…These shortcomings highlight the value of hybrid (Reichstein et al., 2019) or differentiable modeling (Shen et al., 2023) strategies in Earth sciences that aim to be effective in creating inherently interpretable models, that is, models that follow a domain-specific set of constraints that make the reasoning processes understandable (Rudin et al., 2022). These strategies involve the integration of physical relationships or models into ML architectures (e.g., Jiang et al., 2020; Kraft et al., 2022; C. Wang et al., 2024).…”
Section: Gap Between Complexity and Interpretability
Confidence: 99%