2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093564

Cloud Removal in Satellite Images Using Spatiotemporal Generative Networks

Cited by 82 publications (112 citation statements)
References 25 publications
“…Unlike the model of [12], our architecture follows that of cycle-consistent GAN and has the advantage of not requiring pixelwise correspondences between cloudy and noncloudy optical training data, thereby also allowing for training or fine-tuning on data where such a requirement may not be met. Complementary to the SAR-optical data fusion approach to cloud removal, recent contributions proposed integrating information of repeated observations over time [10], [11]. The work indicates promising results but trades temporal resolution for obtaining a single cloud-free observation, whereas our approach predicts one cloud-free output per cloudy input image and, thus, allows for sequence-to-sequence translation.…”
Section: A. Related Work (mentioning)
Confidence: 98%
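The practical point of the excerpt above is that a cycle-consistency objective replaces pixelwise-paired supervision: each image only has to survive a round trip through both generators. Below is a minimal sketch of that term in PyTorch; the generator names `g_clear` (cloudy to cloud-free) and `g_cloudy` (cloud-free to cloudy) and the weight `lam` are illustrative assumptions, not the cited model's code.

```python
import torch.nn.functional as nnf

def cycle_loss(g_clear, g_cloudy, cloudy, clear, lam=10.0):
    """lam * (||g_cloudy(g_clear(x)) - x||_1 + ||g_clear(g_cloudy(y)) - y||_1)."""
    rec_cloudy = g_cloudy(g_clear(cloudy))  # cloudy -> clear -> cloudy round trip
    rec_clear = g_clear(g_cloudy(clear))    # clear -> cloudy -> clear round trip
    # Each image is compared only with its own reconstruction, so the two
    # training sets never need pixelwise correspondences between them.
    return lam * (nnf.l1_loss(rec_cloudy, cloudy) +
                  nnf.l1_loss(rec_clear, clear))
```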
“…Moreover, current multitemporal approaches make strong assumptions about the maximum permissible amount of cloud-coverage affecting individual images in the input time series, which is required to be no more than 25% or 50% of cloud coverage for the method of [10] and 10%-30% in the work of [11]. Our curated data sets evidence that such strict requirements on the percentage of cloudiness may, oftentimes, not be met in practice.…”
Section: A. Related Work (mentioning)
Confidence: 99%
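To make the coverage-threshold assumption concrete, here is a small NumPy sketch of the kind of filtering the excerpt describes: frames whose cloud fraction exceeds a threshold (0.25 or 0.5 for [10], 0.1-0.3 for [11]) are dropped from the time series, which can leave few or no usable frames over persistently cloudy regions. The function and its inputs are hypothetical illustrations, not code from the cited works.

```python
import numpy as np

def usable_frames(cloud_masks, max_cloud_fraction=0.25):
    """cloud_masks: (T, H, W) boolean array, True where a pixel is cloudy.
    Returns indices of frames whose cloud fraction is within the threshold."""
    fractions = cloud_masks.reshape(len(cloud_masks), -1).mean(axis=1)
    return np.flatnonzero(fractions <= max_cloud_fraction)
```

With `max_cloud_fraction=0.1`, a series of mostly half-clouded frames yields an empty index list, which is exactly the failure mode the excerpt argues occurs in practice.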
“…Sarukkai et al. [SJUE19] cast the problem of cloud removal as a conditional image synthesis challenge, and propose a trainable spatio‐temporal generator network to remove clouds. Their model is trained and validated on a new large‐scale spatio‐temporal dataset constituted by real images.…”
Section: Related Work (mentioning)
Confidence: 99%
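One common way to condition image synthesis on a time series, in the spirit of the excerpt, is to stack the T cloudy observations along the channel axis and let a convolutional generator predict a single cloud-free image. The sketch below assumes that layout; the layer sizes and structure are placeholders, not the architecture of Sarukkai et al.

```python
import torch.nn as nn

class SpatiotemporalGenerator(nn.Module):
    """Toy sequence-conditioned generator: (B, T, C, H, W) -> (B, C, H, W)."""
    def __init__(self, frames=3, bands=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(frames * bands, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, bands, 3, padding=1),  # one cloud-free output image
        )

    def forward(self, seq):
        b, t, c, h, w = seq.shape
        # Fold time into channels so spatial convolutions see all frames at once.
        return self.net(seq.reshape(b, t * c, h, w))
```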
“…However, per-pixel annotations of clouds are time consuming and tedious, while per-image annotated datasets only require one click for the whole image. In this paper, we present a cloud detector that has learned from a per-image annotated dataset of panchromatic satellite images, derived from the single-image dataset [18] appearing in Figure 1. The approach is generic and can be extended to any sort of single-band image.…”
Section: Introduction (mentioning)
Confidence: 99%
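The contrast the excerpt draws, per-pixel masks versus one label per image, amounts to training an image-level classifier under weak supervision. A minimal sketch of that setup for single-band (panchromatic) input follows; the model and loss are illustrative assumptions, not the paper's detector.

```python
import torch.nn as nn

# Image-level cloudy-vs-clear classifier: supervision is one binary label
# per image (one annotator click), not a per-pixel cloud mask.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # single-band input
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                                      # cloudiness logit
)
loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on per-image labels
```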