2022
DOI: 10.1016/j.jhydrol.2021.127244
Bayesian convolutional neural networks for predicting the terrestrial water storage anomalies during GRACE and GRACE-FO gap


Cited by 54 publications (54 citation statements)
References 58 publications
“…Our results agree with the learning methods that incorporated hydroclimatic data to reconstruct the gap in terms of CC and NSE, and outperform them in terms of RMSE. Specifically, the results are more consistent with the DNN and Bayesian CNN (BCNN) reconstructions (Mo et al., 2022; Sun et al., 2020) than with the Swarm-driven, statistical-learning-driven (ANN, ARX, MLR) outputs of Li et al. (2020). The training periods are similar in both studies (04/2002-01-06/2014) (Mo et al., 2022; Sun et al., 2020), but the testing periods differ: 04/2014 to 06/2017 (Sun et al., 2020) versus 04/2014-06/2017 and 01/2018-09/2020 (Mo et al., 2022).…”
Section: Comparison With Other Reconstructed Data (supporting)
confidence: 57%
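The statement above compares reconstructions by CC, NSE, and RMSE. As a minimal sketch (not code from any of the cited studies), these three skill metrics can be computed from paired observed and reconstructed TWSA series as follows; the function names are illustrative:

```python
import numpy as np

def cc(obs, sim):
    """Pearson correlation coefficient between observed and simulated TWSA."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.corrcoef(obs, sim)[0, 1]

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values below 0 mean
    the simulation performs worse than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root-mean-square error in the same units as the input (e.g., cm of
    equivalent water height)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))
```

Agreement "in terms of CC and NSE" but improvement "in terms of RMSE" is possible because CC and NSE are insensitive to uniform scaling of errors in different ways than RMSE, which penalizes absolute deviations directly.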
See 2 more Smart Citations
“…Our results agree with the learning methods that incorporated hydroclimatic data to reconstruct the gap in terms of CC and NSE and outperform them in term of RMSE. Specifically, the results are more consistent with DNN and Bayesian CNN (BCNN) reconstructions (Mo et al, 2022;Sun et al, 2020), than SWARM driven, statistical learning driven (ANN, ARX, MLR) outputs from (Li et al, 2020). The training periods are similar in both studies (04/2002-01-06/2014) (Mo et al, 2022;Sun et al, 2020), but the testing period of performance is different: 04/2014 to 06/2017 (Sun et al, 2020), and 04/2014-06/2017, and 01/2018-09/2020 (Mo et al, 2022).…”
Section: Comparison With Other Reconstructed Datasupporting
confidence: 57%
“…We compared our reconstructed data with data from four other GRACE reconstructions (Li et al., 2020; Mo et al., 2022; Sun et al., 2020). These studies were concerned with filling the gap between the missions only.…”
Section: Results (mentioning)
confidence: 99%
“…Drought Condition Classification Based on the Water Storage Deficit Index C3. Comparison of the testing Nash-Sutcliffe efficiency coefficient (NSE) accuracy obtained in a previous study (Mo et al., 2022) and this study for (a-c) the terrestrial water storage anomaly (TWSA) and (d-f) the water storage deficit (WSD) signals. The major differences between Mo et al. (2022) and this study are as follows: (a) Mo et al. (2022) used only four predictors (simulated/reanalyzed TWSA, cumulative water storage change, P, and T) derived from the ERA5-Land data set, without the data-source selection strategy; (b) the Jet Propulsion Laboratory Gravity Recovery and Climate Experiment (GRACE) mascon product was used in Mo et al. (2022), whereas a weighted-average GRACE product is used here; and (c) the Bayesian convolutional neural network method employed in this study has been improved (see Appendix A).…”
Section: Table C2 (mentioning)
confidence: 91%
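The statement above distinguishes the TWSA signal from the water storage deficit (WSD). A commonly used definition of WSD subtracts the climatological monthly mean from each month's TWSA, so negative values indicate below-normal storage; the sketch below assumes that definition, which may differ in detail from the cited study:

```python
import numpy as np

def water_storage_deficit(twsa, months):
    """WSD: each month's TWSA minus the climatological mean TWSA for that
    calendar month. Negative values indicate a storage deficit."""
    twsa = np.asarray(twsa, float)
    months = np.asarray(months)
    # Climatology: mean TWSA per calendar month over the whole record
    clim = {m: twsa[months == m].mean() for m in np.unique(months)}
    return twsa - np.array([clim[m] for m in months])
```

Because the monthly climatology is removed, the WSD suppresses the seasonal cycle and isolates interannual anomalies, which is why drought indices are built on it rather than on raw TWSA.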
“…In contrast, mathematical and statistical approaches, although containing no physical information, have also proven able to reasonably fill the GRACE and GRACE-FO gap [13]. To incorporate physical information into TWS reconstruction, mathematical algorithms were either modified with meteorological parameters [14,15] or trained with SLR data [16], with GRACE data only [17], with GRACE and Swarm data [18], or with a combination of satellite data and meteorological/hydrological parameters [19,20], and were demonstrated to correlate well with GRACE [21,22]. The main advantage of mathematical approaches is that they can predict climate signals such as droughts [23] or floods [24], and reconstruct signals of the El Niño oscillation [22] or human-induced TWS changes [25], at any spatial resolution.…”
Section: Introduction (mentioning)
confidence: 99%