2019
DOI: 10.48550/arxiv.1901.08236
Preprint

Reciprocal Translation between SAR and Optical Remote Sensing Images with Cascaded-Residual Adversarial Networks

Abstract: Despite the advantages of all-weather and all-day high-resolution imaging, synthetic aperture radar (SAR) images are much less viewed and used by the general public because human vision is not adapted to microwave scattering phenomena. However, expert interpreters can be trained to learn the mapping rules from SAR to optical by comparing side-by-side SAR and optical images. This paper attempts to develop machine intelligence that is trainable with large-volume co-registered SAR and optical images to translate SA…

Cited by 2 publications (3 citation statements) | References 38 publications

“…Second, the dissimilarity between SAR and optical images impedes their fusion, particularly in areas where ground features undergo significant changes. To illustrate this, S2O translation results [3,27,29,45] are better in mountains, rivers, forests, farmland, and other natural scenes, whereas man-made scenes such as buildings and vehicles are hard to restore. In this paper, the high-resolution SAR-optical image dataset of targets is further expanded to reduce the difficulty of image fusion, and a carefully designed image fusion network is explored to facilitate high-quality bidirectional SAR-optical image translation of targets.…”
Section: SAR-Optical Image Fusion (mentioning)
confidence: 99%
“…With the development of deep learning [19-21], the Generative Adversarial Network (GAN) has received increasing attention in remote sensing due to its superior performance in data generation [22]. Many researchers have since tried to feed optical-SAR images into a Conditional Generative Adversarial Network (cGAN) [23], a Cycle-consistent Adversarial Network (CycleGAN) [24], and other GAN models [25,26]. A GAN-based model has two modules: the Generator, which extracts features from the input data to generate simulated images, and the Discriminator, which judges whether the simulated images are real.…”
Section: Introduction (mentioning)
confidence: 99%
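The two-module structure described in the quote above maps directly onto code. Below is a minimal PyTorch sketch of one conditional-GAN step for SAR-to-optical translation; it is an illustration under assumed shapes and layer choices (the tiny Generator and Discriminator here are placeholders, not the cascaded-residual architecture of the cited paper).

```python
# Minimal conditional-GAN sketch: a Generator maps a SAR image to a
# simulated optical image; a Discriminator judges whether an optical
# image paired with the SAR input is real or generated.
# Layer sizes and names are illustrative assumptions, not the cited model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                       # tiny encoder-decoder
            nn.Conv2d(1, 64, 4, stride=2, padding=1),   # 1-channel SAR in
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),                                  # 3-channel optical out
        )

    def forward(self, sar):
        return self.net(sar)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                       # judges (SAR, optical) pairs
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),   # patch-wise real/fake logits
        )

    def forward(self, sar, opt):
        return self.net(torch.cat([sar, opt], dim=1))

# One illustrative adversarial step on random tensors standing in for data.
G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
sar = torch.randn(2, 1, 64, 64)      # batch of SAR patches (placeholder)
opt = torch.randn(2, 3, 64, 64)      # co-registered optical patches (placeholder)

fake = G(sar)
d_real = D(sar, opt)
d_fake = D(sar, fake.detach())       # detach: D's loss does not update G
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
d_out = D(sar, fake)
loss_g = bce(d_out, torch.ones_like(d_out))  # G tries to make D say "real"
print(loss_d.item(), loss_g.item())
```

In a real pipeline the random tensors would be replaced by co-registered SAR/optical patches and each loss would be backpropagated through its own optimizer; the detach() on the generated image is what keeps the discriminator update from flowing gradients into the generator.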
“…This means that, even if we have filtered by cloud percentage and selected images with only scattered clouds, some of the small split patches may still be entirely covered by thin or thick clouds. Moreover, this approach requires the corrupted optical images as input, so it cannot generate cloud-free optical images for time phases at which no optical images were captured by the satellite [25,40].…”
Section: Introduction (mentioning)
confidence: 99%