2019
DOI: 10.3390/rs11171962

Retrieval of Cloud Optical Thickness from Sky-View Camera Images using a Deep Convolutional Neural Network based on Three-Dimensional Radiative Transfer

Abstract: Observation of the spatial distribution of cloud optical thickness (COT) is useful for the prediction and diagnosis of photovoltaic power generation. However, there is not a one-to-one relationship between transmitted radiance and COT (so-called COT ambiguity), and it is difficult to estimate COT because of three-dimensional (3D) radiative transfer effects. We propose a method to train a convolutional neural network (CNN) based on a 3D radiative transfer model, which enables the quick estimation of the slant-c…
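The abstract describes a CNN, trained on output from a 3D radiative transfer model, that maps sky-view camera radiance images to a spatial COT field. Purely as a rough illustration, not the paper's actual network, the minimal sketch below shows a fully convolutional encoder-decoder in PyTorch; the channel counts, layer depths, and the choice to regress log10(COT) per pixel are assumptions.

```python
# Minimal sketch (not the authors' architecture): a fully convolutional
# network that maps a multi-channel sky-camera radiance image to a
# per-pixel cloud optical thickness (COT) map. Layer sizes, channel
# counts, and the log-COT output are illustrative assumptions.
import torch
import torch.nn as nn

class SkyToCOT(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # downsample by 2
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # upsample back
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # one value per pixel
        )

    def forward(self, radiance: torch.Tensor) -> torch.Tensor:
        # Interpret the output as log10(COT) to tame the large dynamic
        # range of optical thickness (an assumption of this sketch).
        return self.decoder(self.encoder(radiance))

if __name__ == "__main__":
    model = SkyToCOT(in_channels=3)
    fake_sky_image = torch.rand(1, 3, 128, 128)   # stand-in for an RGB sky-view image
    log_cot = model(fake_sky_image)               # shape (1, 1, 128, 128)
    print(log_cot.shape)
```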

Cited by 26 publications (20 citation statements)
References 46 publications
“…Based on digital sky images recorded by a previously calibrated sky camera system, we estimated the COD spatial distribution using a deep-learning algorithm based on 3D radiative transfer (Masuda et al 2019). The angular distribution of visible transmitted radiance training data was generated by simulations using Monte Carlo Atmospheric Radiative Transfer Simulator (MCARaTS), a 3D radiative transfer model (Iwabuchi 2006), assuming various 3D cloud fields obtained from high-resolution cloud dynamical simulations.…”
Section: Results
Mentioning; confidence: 99%
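The excerpt above describes training data built by running a 3D radiative transfer model (MCARaTS) over cloud fields from high-resolution dynamical simulations. Purely as an illustration of how such simulated pairs might be organized for supervised learning, the sketch below assumes the rendered radiance images and matching "true" COT maps are already stored on disk as NumPy arrays; the file layout, array shapes, and log transform are assumptions, and no MCARaTS interface is shown.

```python
# Illustrative only: pair simulated transmitted-radiance images with the
# "true" COT maps of the cloud fields they were rendered from, so a CNN
# can be trained in a supervised fashion. File naming, array shapes, and
# the log-COT target are assumptions, not the authors' pipeline.
from pathlib import Path
import numpy as np
import torch
from torch.utils.data import Dataset

class SimulatedSkyDataset(Dataset):
    def __init__(self, root: str):
        # Expect matching files such as radiance_0001.npy / cot_0001.npy,
        # holding (H, W, C) radiance arrays and (H, W) COT arrays.
        self.radiance_files = sorted(Path(root).glob("radiance_*.npy"))

    def __len__(self) -> int:
        return len(self.radiance_files)

    def __getitem__(self, idx: int):
        rad_path = self.radiance_files[idx]
        cot_path = rad_path.with_name(rad_path.name.replace("radiance", "cot"))
        radiance = np.load(rad_path).astype(np.float32)        # (H, W, C)
        cot = np.load(cot_path).astype(np.float32)             # (H, W)
        x = torch.from_numpy(radiance).permute(2, 0, 1)        # (C, H, W)
        y = torch.log10(torch.from_numpy(cot).clamp(min=0.1))  # log-COT target
        return x, y.unsqueeze(0)                               # (1, H, W) target
```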
“…Cloud field properties are derived from LES and SSI from 3‐D radiative transfer, providing known inputs and outputs and the need to learn the complex relationship between them. This problem is an ideal candidate for machine learning, which is increasingly being adopted in the atmospheric sciences (e.g., Glassmeier et al, 2019; Masuda et al, 2019; McGovern et al, 2017; O'Gorman & Dwyer, 2018; Thampi et al, 2017). One of the common criticisms of machine learning approaches is that they are used as a “black box” providing little physical interpretability.…”
Section: Introduction
Mentioning; confidence: 99%
“…The CNN is trained on a large database where the multi-spectral radiances are simulated with 3D RT and the pixel-scale cloud properties (typically, optical thickness and column-mean effective droplet radius) are known from the LES and assumed microphysics. Deep learning has thus been applied to both nadir satellite imagery [40] and to an array of ground-based sky cameras [41].…”
Section: Context
Mentioning; confidence: 99%
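Tying together the illustrative sketches above (SkyToCOT and SimulatedSkyDataset), a minimal supervised training loop might look like the following. It regresses per-pixel log-COT only, ignoring the multi-spectral and droplet-radius aspects mentioned in the excerpt; the loss, optimizer, and hyperparameters are assumptions rather than the setups used in the cited studies.

```python
# Minimal sketch of supervised training on simulated data, using the
# illustrative SkyToCOT model and SimulatedSkyDataset defined earlier.
# Loss, optimizer, and hyperparameters are assumptions.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs: int = 10, lr: float = 1e-3):
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()  # per-pixel regression of log-COT
    model.train()
    for epoch in range(epochs):
        total = 0.0
        for radiance, log_cot in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(radiance), log_cot)
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch}: mean loss {total / len(loader):.4f}")
```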