With the rapid development of remote sensing monitoring and computer vision technology, deep learning methods have made great progress in applications such as Earth observation, climate change monitoring, and even space exploration. However, a model trained on existing data cannot be directly applied to new remote sensing data, and labeling the new data is time-consuming and labor-intensive. Unsupervised Domain Adaptation (UDA) addresses this problem by treating the labeled data as the source domain and the unlabeled data as the target domain; its essential purpose is to obtain a well-trained model while tackling the data distribution discrepancy, known as the domain shift, between the source and target domains. Many reviews have elaborated on UDA methods for natural images, but few of these studies thoroughly consider remote sensing applications and contributions. Thus, in order to explore the further progress and development of UDA methods in remote sensing, this paper provides a comprehensive review, grounded in an analysis of the causes of domain shift, with a fine-grained taxonomy of UDA methods applied to remote sensing data, covering generative training, adversarial training, self-training, and hybrid training methods, to better assist scholars in understanding remote sensing data and to further advance method development. Moreover, remote sensing applications are introduced through a thorough dataset analysis. We also sort out the definitions and methodologies of partial, open-set, and multi-domain UDA, which are more pertinent to real-world remote sensing applications. We conclude that UDA methods for remote sensing data emerged later than those for natural images, and that, owing to the domain gap caused by appearance differences, most methods focus on using generative training (GT) to improve model performance. Finally, we describe the remaining deficiencies of UDA in remote sensing and offer further in-depth insights.
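To make the adversarial-training branch of the taxonomy concrete, the sketch below shows a minimal PyTorch implementation of domain-adversarial training with a gradient reversal layer: a shared feature extractor is trained on labeled source data while a domain discriminator pushes source and target features to become indistinguishable. The network sizes, hyperparameters, and random tensors are illustrative assumptions and are not taken from any method surveyed in the review.

```python
# Minimal sketch (assumed PyTorch) of adversarial-training UDA: a feature
# extractor is shared by a label classifier (source data only) and a domain
# discriminator trained through a gradient reversal layer, so that learned
# features become domain-invariant. All shapes and values are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature extractor.
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
classifier = nn.Linear(256, 10)      # source-label head (10 hypothetical classes)
discriminator = nn.Linear(256, 2)    # source-vs-target head
opt = torch.optim.Adam(
    list(features.parameters()) + list(classifier.parameters())
    + list(discriminator.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x_src, y_src, x_tgt, lam=1.0):
    f_src, f_tgt = features(x_src), features(x_tgt)
    cls_loss = ce(classifier(f_src), y_src)
    # Domain labels: 0 = source, 1 = target; reversed gradients make the
    # feature extractor try to fool the discriminator.
    dom_logits = discriminator(GradReverse.apply(torch.cat([f_src, f_tgt]), lam))
    dom_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    loss = cls_loss + ce(dom_logits, dom_labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random tensors standing in for source/target image patches.
x_s, y_s = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_t = torch.randn(8, 3, 32, 32)
print(train_step(x_s, y_s, x_t))
```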
Sea fog detection is a challenging and essential issue in satellite remote sensing. Although conventional threshold methods and deep learning methods can achieve pixel-level classification, they struggle to distinguish ambiguous boundaries and thin structures from the background. Considering the correlations between neighboring pixels and the affinities between superpixels, this letter proposes a correlation context-driven method for sea fog detection, built around a two-stage superpixel-based fully convolutional network (SFCNet). A fully connected Conditional Random Field (CRF) is utilized to model the dependencies between pixels. To alleviate the problem of high cloud occlusion, an attentive Generative Adversarial Network (GAN) is employed for image enhancement by exploiting contextual information. Experimental results demonstrate that the proposed method achieves 91.65% mIoU and produces more refined segmentation results, performing well on small, fragmented fog patches and weak-contrast thin structures, while also detecting more of the obscured regions.
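To illustrate the fully connected CRF refinement step mentioned above, the following minimal sketch applies dense-CRF post-processing to a network's softmax output using the publicly available pydensecrf package. The refine_with_crf function, the random image, and all kernel parameters are illustrative assumptions and do not reproduce the authors' exact SFCNet pipeline.

```python
# Minimal sketch of dense-CRF refinement of a pixel-wise softmax map, assuming
# the pydensecrf package is installed; all inputs and parameters are toy values.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_with_crf(rgb_image, softmax_probs, iterations=5):
    """rgb_image: (H, W, 3) uint8; softmax_probs: (C, H, W) float32."""
    n_classes, h, w = softmax_probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))
    # Smoothness kernel: nearby pixels tend to share a label.
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: pixels with similar color and position attract each
    # other, which sharpens fog boundaries and recovers thin structures.
    d.addPairwiseBilateral(sxy=60, srgb=10,
                           rgbim=np.ascontiguousarray(rgb_image), compat=10)
    q = d.inference(iterations)
    return np.argmax(q, axis=0).reshape(h, w)

# Toy usage: 2-class (fog / background) probabilities for a random image.
img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
probs = np.random.dirichlet(np.ones(2), size=(64, 64)).transpose(2, 0, 1).astype(np.float32)
mask = refine_with_crf(img, probs)
print(mask.shape)
```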