2021
DOI: 10.1109/tgrs.2020.3032790
Stochastic Super-Resolution for Downscaling Time-Evolving Atmospheric Fields With a Generative Adversarial Network

Abstract: Generative adversarial networks (GANs) have been recently adopted for super-resolution, an application closely related to what is referred to as "downscaling" in the atmospheric sciences: improving the spatial resolution of low-resolution images. The ability of conditional GANs to generate an ensemble of solutions for a given input lends itself naturally to stochastic downscaling, but the stochastic nature of GANs is not usually considered in super-resolution applications. Here, we introduce a recurrent, stochastic…
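As a concrete illustration of the ensemble idea described in the abstract, the following is a minimal sketch in Python/TensorFlow. The toy architecture, layer sizes, and grid dimensions are illustrative assumptions, not the paper's actual model; the point is only that a conditional GAN generator, sampled repeatedly with different noise vectors for the same low-resolution input, yields a stochastic downscaling ensemble.

```python
# Minimal sketch, NOT the paper's implementation: a toy conditional generator
# that maps a low-resolution field plus a noise vector to a high-resolution
# field, sampled several times to form a downscaling ensemble.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(lr_shape=(16, 16, 1), noise_dim=64, upscale=4):
    """Toy conditional generator: (low-res field, noise) -> high-res field."""
    lr_in = layers.Input(shape=lr_shape)
    z_in = layers.Input(shape=(noise_dim,))
    # Project the noise vector onto the low-res grid and concatenate it
    # with the conditioning field as an extra channel.
    z = layers.Dense(lr_shape[0] * lr_shape[1])(z_in)
    z = layers.Reshape((lr_shape[0], lr_shape[1], 1))(z)
    x = layers.Concatenate()([lr_in, z])
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    for _ in range(int(np.log2(upscale))):  # e.g. 4x upscaling = two 2x steps
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    hr_out = layers.Conv2D(lr_shape[-1], 3, padding="same")(x)
    return tf.keras.Model([lr_in, z_in], hr_out)

generator = build_generator()
lr_field = np.random.rand(1, 16, 16, 1).astype("float32")  # stand-in low-res input

# Stochastic downscaling: one conditioning input, many noise draws -> an
# ensemble of plausible high-resolution realizations.
ensemble = np.stack([
    generator([lr_field, np.random.randn(1, 64).astype("float32")]).numpy()[0]
    for _ in range(8)
])
print(ensemble.shape)  # (8, 64, 64, 1)
```

In a trained model the spread of such an ensemble would quantify the uncertainty of the downscaled field; here the untrained generator only demonstrates the sampling mechanism.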

Cited by 102 publications (110 citation statements)
References 36 publications
“…Its measurement principle is detailed in Garrett et al. (2012). Several works exploited MASC data to investigate geometry and fall speed characteristics of hydrometeors (Garrett and Yuter, 2014; Garrett et al., 2015; Jiang et al., 2019), and others were devoted to hydrometeor classification techniques, such as Praz et al. (2017), Hicks and Notaros (2019), and Leinonen and Berne (2020).…”
Section: The Multi-Angle Snowflake Camera (MASC) (mentioning)
confidence: 99%
“…More recently, accurate and high-resolution depictions of snowflakes could be obtained with imagers like the Snow Video Imager/Particle Image Probe (Newman et al., 2009) or with the Multi-Angle Snowflake Camera (MASC; Garrett et al., 2012). The availability of actual images has promoted the development and rapid improvement of several automatic hydrometeor classification techniques (Grazioli et al., 2014; Gavrilov et al., 2015; Praz et al., 2017; Leinonen and Berne, 2020) adapted to the data of these sensors. While the accuracy of the measurements of fall velocity provided by those instruments is often hampered by wind and turbulence (Nešpor et al., 2000; Garrett and Yuter, 2014; Fitch et al., 2021), the added value in terms of microphysical characterization is significant.…”
Section: Introduction (mentioning)
confidence: 99%
“…A third approach is a Linear Inverse Modeling framework (Newman et al., 2003; Martinez-Villalobos et al., 2017), where the predictive modes are represented as covariance functions in a reduced space (e.g., functions of PCAs). We can also model the system with reduced complexity and represent higher-complexity processes as AI-driven stochastic processes (Chattopadhyay et al., 2020; Crommelin and Edeling, 2020; Alcala and Timofeyev, 2020; Leinonen et al., 2020). To characterize noise relevant for predicting high-frequency signals, convection-resolving simulations such as the DYAMOND ensemble (Stevens et al., 2019) provide comprehensive data coverage to characterize variability in small-scale processes (Christensen, 2020).…”
Section: (A) The Stochastic Surrogate Models (mentioning)
confidence: 99%
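The reduced-space stochastic modeling mentioned in the statement above can be made concrete with a short sketch. The code below is a simplified illustration with synthetic placeholder data, not the code of any of the cited studies: it fits a Linear-Inverse-Model-style one-step propagator in a PCA-reduced space and represents the unresolved dynamics by an additive stochastic term drawn from the residual covariance.

```python
# Simplified illustration with synthetic data, not the cited studies' code:
# a linear inverse model fitted in a PCA-reduced space, with unresolved
# variability represented by an additive stochastic term.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 50))  # placeholder time series (time x grid points)
snapshots -= snapshots.mean(axis=0)

# Reduce the state to a few principal components.
_, _, vt = np.linalg.svd(snapshots, full_matrices=False)
k = 5
pcs = snapshots @ vt[:k].T  # (time, k) reduced state

# Estimate a one-step linear propagator G from lag-0 and lag-1 covariances.
c0 = pcs[:-1].T @ pcs[:-1] / (len(pcs) - 1)
c1 = pcs[1:].T @ pcs[:-1] / (len(pcs) - 1)
G = c1 @ np.linalg.inv(c0)

# Whatever the linear propagator does not capture is treated as noise.
resid = pcs[1:] - pcs[:-1] @ G.T
Q = np.cov(resid.T)

# One stochastic one-step forecast from the last state.
forecast = pcs[-1] @ G.T + rng.multivariate_normal(np.zeros(k), Q)
print(forecast.shape)  # (5,)
```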
“…Other studies have used a recurrent neural network structure (Leinonen et al., 2020) to permit generated outputs to evolve in time in a consistent manner, so that the GAN generator can model the time evolution of fields and the discriminator can evaluate the plausibility of image sequences rather than single images. The 3-hourly data that we use in this study are potentially too coarse to consider time dependencies: short-duration events may disappear between timesteps, and even for long-duration events, 3 hours may be too long to capture a smooth transition from one timestep to the next, as preferred by the learning process.…”
Citation type: mentioning
confidence: 99%
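The recurrent GAN structure described in the last citation statement can be sketched as follows. This is a schematic toy example whose architecture, layer sizes, and sequence length are assumptions rather than the design of Leinonen et al. (2020): a ConvLSTM-based generator produces a temporally consistent sequence of high-resolution fields, and a discriminator scores whole sequences rather than single frames.

```python
# Schematic toy example, not the cited architecture: a recurrent generator
# that evolves a downscaled field through time, and a discriminator that
# judges the plausibility of an entire image sequence.
import tensorflow as tf
from tensorflow.keras import layers

T, H, W = 8, 64, 64  # illustrative sequence length and high-res grid size

def build_recurrent_generator(lr_size=16, channels=1):
    """Sequence of low-res fields -> sequence of high-res fields."""
    lr_seq = layers.Input(shape=(T, lr_size, lr_size, channels))
    x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(lr_seq)
    x = layers.TimeDistributed(layers.UpSampling2D(4))(x)  # 16 -> 64
    x = layers.TimeDistributed(
        layers.Conv2D(32, 3, padding="same", activation="relu"))(x)
    hr_seq = layers.TimeDistributed(
        layers.Conv2D(channels, 3, padding="same"))(x)
    return tf.keras.Model(lr_seq, hr_seq)

def build_sequence_discriminator(channels=1):
    """Scores a whole high-res sequence, not a single frame."""
    hr_seq = layers.Input(shape=(T, H, W, channels))
    x = layers.TimeDistributed(layers.Conv2D(32, 3, strides=2, activation="relu"))(hr_seq)
    x = layers.TimeDistributed(layers.Conv2D(64, 3, strides=2, activation="relu"))(x)
    x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)
    x = layers.LSTM(64)(x)  # aggregate evidence across timesteps
    score = layers.Dense(1)(x)
    return tf.keras.Model(hr_seq, score)

gen = build_recurrent_generator()
disc = build_sequence_discriminator()
print(gen.output_shape, disc.output_shape)  # (None, 8, 64, 64, 1) (None, 1)
```

As the citation statement notes, such a sequence-level setup is only useful when consecutive timesteps are close enough for the transition between them to be learnable; with 3-hourly data that assumption may not hold.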