2020
DOI: 10.1109/tgrs.2020.2980417

ColorMapGAN: Unsupervised Domain Adaptation for Semantic Segmentation Using Color Mapping Generative Adversarial Networks

Abstract: Due to various reasons, such as atmospheric effects and differences in acquisition, there often exists a large difference between the spectral bands of satellite images collected from different geographic locations. This large shift between the spectral distributions of training and test data causes current state-of-the-art supervised learning approaches to output unsatisfactory maps. We present a novel semantic segmentation framework that is robust to such a shift. The key component of the pro…
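The abstract is truncated before the method itself is described. As a rough, hedged illustration of the "color mapping" idea in the title, the sketch below implements a learnable per-band lookup table that recolors source images; in practice such a mapping would be trained adversarially against target-domain images. This is not the authors' exact generator, and all names are illustrative.

```python
# Hypothetical sketch of a learnable per-band color mapping (PyTorch).
# NOT the exact ColorMapGAN architecture; adversarial training is omitted.
import torch
import torch.nn as nn

class ColorMapping(nn.Module):
    """Maps every 8-bit intensity of every band to a new value via a
    learnable lookup table (one table per band)."""
    def __init__(self, num_bands: int = 3, num_levels: int = 256):
        super().__init__()
        # Initialize as the identity mapping so training starts from "no change".
        identity = torch.arange(num_levels, dtype=torch.float32) / (num_levels - 1)
        self.table = nn.Parameter(identity.repeat(num_bands, 1))  # (bands, levels)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, C, H, W) with integer intensities in [0, 255]
        idx = img.long().clamp(0, self.table.shape[1] - 1)
        out = torch.stack(
            [self.table[c][idx[:, c]] for c in range(idx.shape[1])], dim=1
        )
        return out  # recolored image in [0, 1], differentiable w.r.t. the table

# Usage: recolor a fake source batch; in a full pipeline the table would be
# trained against a discriminator on target-domain images, and the segmentation
# network would then be fine-tuned on the recolored source images.
recolored = ColorMapping()(torch.randint(0, 256, (1, 3, 64, 64)))
```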

Cited by 129 publications (108 citation statements)
References 61 publications
“…Table II shows the results in the target domain (Proba-V) using the PV24 dataset with the trained DA transformation G PV→LU (called full DA in the table) and without it (called no DA). We also included the results of the ablation study, where we have set some of the weights of the generator losses to zero and the results using histogram matching [36] for domain adaptation as in [47]. In addition, results are compared with the FCNN trained in original Proba-V images and ground truths (PV-trained), which serves as an upper bound reference, and with the operational Proba-V cloud detection algorithm (v101) [27].…”
Section: B. Domain Adaptation for Cloud Detection
Mentioning confidence: 99%
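The histogram-matching baseline mentioned in this statement can be reproduced, in spirit, with scikit-image's match_histograms; the snippet below is a hedged illustration using synthetic arrays as stand-ins for source and target patches (the actual data and preprocessing in the cited work may differ).

```python
# Sketch of per-band histogram matching as a simple domain-adaptation baseline.
# Requires a recent scikit-image version that supports the channel_axis keyword.
import numpy as np
from skimage.exposure import match_histograms

source = np.random.rand(256, 256, 4).astype(np.float32)     # stand-in source patch
reference = np.random.rand(256, 256, 4).astype(np.float32)  # stand-in target-domain patch

# channel_axis=-1 matches each spectral band independently
matched = match_histograms(source, reference, channel_axis=-1)
```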
“…However, because RS is a vast field, it is tricky to classify such studies. As described by Tuia et al [37], some methods are based on selecting invariant features on the training data [38], [39], whereas others are based on the adaptation of the data distribution [40]- [42], and lastly building the adaptation in the classifier. The first type of method may be time-consuming because of the difficulty of selecting invariant data, which might require building a new dataset for each target data point.…”
Section: Related Work
Mentioning confidence: 99%
“…Then, the translated dataset is used to fine-tune the segmentation model to enhance its ability to treat images from the target domain. The second work is the work proposed by Tasar et al [37]. They adopted the same algorithm of Benjdira et al [4] but changed the GAN architecture into another architecture named ColorMapGAN.…”
Section: Related Work
Mentioning confidence: 99%
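The workflow described in this statement (translate the source dataset to the target style, then fine-tune the segmentation model on the translated images) could look roughly like the sketch below; seg_net, mapping, and loader are placeholder names, not components taken from the cited papers.

```python
# Hedged sketch of fine-tuning a segmentation network on translated source images.
import torch

def finetune(seg_net, mapping, loader, epochs=5, lr=1e-4):
    opt = torch.optim.Adam(seg_net.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    mapping.eval()  # the color/style mapping is frozen at this stage
    for _ in range(epochs):
        for images, labels in loader:          # source images + source ground truth
            with torch.no_grad():
                translated = mapping(images)   # source images restyled like the target domain
            opt.zero_grad()
            loss = loss_fn(seg_net(translated), labels)
            loss.backward()
            opt.step()
    return seg_net
```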