2022
DOI: 10.5194/gmd-15-6747-2022
Downscaling multi-model climate projection ensembles with deep learning (DeepESD): contribution to CORDEX EUR-44

Abstract. Deep learning (DL) has recently emerged as an innovative tool to downscale climate variables from large-scale atmospheric fields under the perfect-prognosis (PP) approach. Different convolutional neural networks (CNNs) have been applied under present-day conditions with promising results, but little is known about their suitability for extrapolating future climate change conditions. Here, we analyze this problem from a multi-model perspective, developing and evaluating an ensemble of CNN-based downsc…

Cited by 25 publications (31 citation statements); references 42 publications.

Citation statements (ordered by relevance):
“…Code and data availability. To promote transparency and reproducibility of our results, we provide the data (DOI: https://doi.org/10.5281/zenodo.6823421, Baño-Medina et al., 2022a) and the companion Jupyter notebook (DOI: https://doi.org/10.5281/zenodo.6828303, Baño-Medina et al., 2022b), explaining how DeepESD has been produced. This notebook is based on R and builds on the climate4R framework, a set of libraries specifically designed for climate data access and post-processing (Iturbide et al., 2019).…”
Section: Discussion (citation type: mentioning; confidence: 99%)
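The statement above refers to the climate4R framework on which the DeepESD companion notebook builds. As a rough illustration only, the sketch below shows how a gridded predictor field might be loaded with climate4R's loadeR package; the dataset path, variable name and domain bounds are hypothetical placeholders, not taken from the cited notebook.

```r
# Illustrative sketch only: the dataset descriptor, variable and domain
# below are placeholders, not those used to produce DeepESD.
library(loadeR)   # climate4R data-access package

tas <- loadGridData(
  dataset = "path/to/ERA-Interim.ncml",  # hypothetical local dataset descriptor
  var     = "tas",                       # near-surface air temperature
  lonLim  = c(-10, 32),                  # illustrative European domain
  latLim  = c(36, 72),
  years   = 1979:2008
)
```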
“…Taking into account the PP assumption that GCM predictors have to be realistically simulated (when compared with the ERA-Interim reanalysis in this case), following Baño-Medina et al. (2022) we perform a signal-preserving adjustment of the monthly mean and variance of the GCM predictors to increase the distributional similarity with their counterpart reanalysis predictor fields.…”
Section: Region of Study and Data (citation type: mentioning; confidence: 99%)
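To make the adjustment quoted above concrete, here is one plausible, minimal implementation of a month-wise, signal-preserving rescaling of the mean and variance of a GCM predictor towards a reanalysis: the correction factors are estimated on the historical period only and then applied unchanged to the whole series, which is what keeps the simulated climate-change signal intact. This is a sketch under those assumptions, not the exact procedure of the cited papers.

```r
# Hedged sketch of a signal-preserving monthly mean/variance adjustment.
# gcm, era: numeric series; months_*: calendar-month indices (1-12);
# hist_idx: logical vector flagging the GCM historical period.
adjust_predictor <- function(gcm, era, months_gcm, months_era, hist_idx) {
  out <- gcm
  for (m in 1:12) {
    g_hist <- gcm[hist_idx & months_gcm == m]   # GCM historical sample, month m
    e      <- era[months_era == m]              # reanalysis sample, month m
    scale  <- sd(e) / sd(g_hist)                # variance correction (historical-based)
    sel    <- months_gcm == m
    # Same shift/scale applied to historical and future values alike,
    # so the projected change signal is preserved.
    out[sel] <- (gcm[sel] - mean(g_hist)) * scale + mean(e)
  }
  out
}
```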
“…However, there is no best model outperforming the others in all regions for all scores and, overall, there is no reason to discard any of these models. To test the extrapolation capability of the CNN models under future climate change conditions (when applied to predictors from GCM projections; see Section 2.1), we follow previous work and use the "raw" GCM projections as pseudo-reality (Vrac et al., 2007; Baño-Medina et al., 2022). We divide the future scenario into three periods (2006-2040, 2041-2070 and 2071-2100) and compute the delta change between the future and historical scenarios for the GCM and CNN models.…”
Section: Standard Evaluation: Cross-validation and Extrapolation (citation type: mentioning; confidence: 99%)
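The delta-change diagnostic described in the quotation can be summarised with a short helper like the one below: the mean of each future period minus the historical-baseline mean, computed separately for the raw GCM and the CNN-downscaled series. The three future periods follow the quotation; the baseline years and object names are illustrative assumptions.

```r
# Sketch of a delta-change computation (assumed historical baseline years).
delta_change <- function(x, years, periods, hist_years = 1976:2005) {
  hist_mean <- mean(x[years %in% hist_years])          # historical-scenario mean
  sapply(periods, function(p) mean(x[years %in% p]) - hist_mean)
}

periods <- list("2006-2040" = 2006:2040,
                "2041-2070" = 2041:2070,
                "2071-2100" = 2071:2100)

# Hypothetical usage, once raw-GCM and CNN-downscaled series are available:
# delta_gcm <- delta_change(gcm_series, gcm_years, periods)
# delta_cnn <- delta_change(cnn_series, cnn_years, periods)
```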
“…Compared to DD, the implementation of SD is fast and far less computationally intensive (Wang, Liu, et al., 2021). However, SD models can perform poorly under extrapolation to future climates as few methods account for nonstationary relationships between predictors and predictands under climate change (Hernanz et al., 2022; Hewitson et al., 2014; Lanzante et al., 2018; Salvi et al., 2016; Schoof, 2013), although there are some exceptions (Baño‐Medina et al., 2022; Pichuka & Maity, 2018). Despite the fairly low cost of SD, it is only possible to implement it in regions where fine‐scale observational data are available for training, and typically little is known about its ability to perform downscaling in regions outside of the training domain (Wang, Tian, et al., 2021).…”
Section: Introduction (citation type: mentioning; confidence: 99%)