Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, it is difficult to rationally exploit features from different levels with existing deep learning networks. To tackle this problem, a novel network based on DenseNets and the attention mechanism was proposed, called the dense-attention network (DAN). The DAN contains an encoder part and a decoder part, which are composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder–decoder architecture can strengthen feature propagation and effectively use higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red–green–blue (RGB) images, demonstrated that the proposed DAN achieved higher scores (96.16% overall accuracy (OA), 92.56% F1 score, 90.56% mean intersection over union (MIOU), shorter training and inference times, and a higher quality value) when compared with other deep learning methods.
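The exact design of the DAN's spatial attention fusion module is not spelled out in the abstract; the sketch below illustrates the general idea it describes, in which a higher-level feature map produces a per-pixel gate that suppresses low-level responses where the semantic evidence is weak. The scalar weight `w` and bias `b` stand in for a learned 1x1 convolution and are assumptions for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def spatial_attention_fuse(low, high, w=1.0, b=0.0):
    """Fuse a low-level and a high-level feature map (H x W grids).

    A per-pixel attention weight is derived from the high-level map via a
    (hypothetical) 1x1 convolution, here a scalar weight `w` and bias `b`,
    followed by a sigmoid. The low-level response is gated by that weight
    and added to the high-level response, so confident high-level evidence
    lets low-level detail through while low-level noise is suppressed.
    """
    fused = []
    for i in range(len(low)):
        row = []
        for j in range(len(low[0])):
            attn = sigmoid(w * high[i][j] + b)  # attention weight in (0, 1)
            row.append(attn * low[i][j] + high[i][j])
        fused.append(row)
    return fused

low = [[0.9, 0.1], [0.2, 0.8]]      # noisy low-level detail
high = [[3.0, -3.0], [-3.0, 3.0]]   # confident high-level semantics
out = spatial_attention_fuse(low, high)
```

Where `high` is strongly positive, most of the low-level detail is passed through; where it is strongly negative, the gate is close to zero and the low-level contribution is nearly removed.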
Land use classification is a fundamental task of information extraction from remote sensing imagery. Semantic segmentation based on deep convolutional neural networks (DCNNs) has shown outstanding performance in this task. However, these methods are still affected by the loss of spatial features. In this study, we proposed a new network, called the dense-coordconv network (DCCN), to reduce the loss of spatial features and strengthen object boundaries. In this network, the coordconv module is introduced into the improved DenseNet architecture to preserve spatial information by injecting coordinate information into the feature maps. The proposed DCCN achieved strong performance on the public ISPRS (International Society for Photogrammetry and Remote Sensing) 2D semantic labeling benchmark dataset. Compared with the results of other deep convolutional neural networks (U-net, SegNet, Deeplab-V3), the results of the DCCN method improved considerably, with the OA (overall accuracy) and mean F1 score reaching 89.48% and 86.89%, respectively. This indicates that the DCCN method can effectively reduce the loss of spatial features and improve the accuracy of semantic segmentation in high-resolution remote sensing imagery.
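The coordconv idea the abstract refers to can be sketched briefly: before a convolution, two extra channels holding normalized row and column coordinates are concatenated to the feature map, so later layers can reason about absolute position. This is a minimal stdlib sketch of that augmentation step, not the DCCN itself.

```python
def add_coord_channels(feature_maps):
    """Append two channels of normalized coordinates to a feature map.

    `feature_maps` is a list of C channels, each an H x W grid. Coordinates
    are scaled to [-1, 1], as in the original CoordConv formulation; the
    explicit position channels help subsequent convolutions preserve
    spatial detail at object boundaries.
    """
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    # Row-coordinate channel varies down rows, column channel across columns.
    i_chan = [[(2.0 * i / (h - 1) - 1.0) if h > 1 else 0.0 for _ in range(w)]
              for i in range(h)]
    j_chan = [[(2.0 * j / (w - 1) - 1.0) if w > 1 else 0.0 for j in range(w)]
              for _ in range(h)]
    return feature_maps + [i_chan, j_chan]

features = [[[0.0] * 4 for _ in range(3)]]  # one 3x4 channel of zeros
augmented = add_coord_channels(features)    # now 3 channels: data + i + j
```

In a real network the augmented tensor is simply fed to the next convolutional layer; the operation adds no learnable parameters.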
Building segmentation is a classical and challenging task in high-resolution remote sensing imagery. Fully convolutional network (FCN)-based approaches have achieved remarkable performance on this task given adequate pixel-wise annotations. However, due to differences in sensor technology as well as appearance in different regions, datasets gathered from these various sources are quite distinct, and dense annotations for a particular area are not always available. Thus, directly applying a segmentation model trained on one dataset (source domain) to another unseen dataset (target domain) usually results in a drop in performance, called the domain gap. In this paper, we propose a weakly-supervised domain adaptation method using adversarial entropy for building segmentation to address this problem. First, we use an adversarial entropy strategy to decrease the entropy and improve the prediction certainty for target images, causing the distributions of the source and target domains to become closer to each other. Second, we propose a simple and effective self-training strategy for the target domain that converts high-confidence predictions into pseudo-labels. We use a series of thresholds to generate the pseudo-labels without introducing extra parameters. This strategy effectively enhances the discriminability of the target domain and further minimizes the distribution discrepancy between the two domains. Experiments on cross-domain aerial datasets have demonstrated the effectiveness and superiority of our proposed method when compared to other state-of-the-art methods.
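The two ingredients above, per-pixel prediction entropy and confidence-thresholded pseudo-labels, can be sketched as follows. The abstract mentions a series of thresholds; the single fixed `threshold` here is a simplification for illustration, and the `ignore` value is an assumed convention for pixels excluded from self-training.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a per-pixel class probability vector."""
    return -sum(q * math.log(q) for q in p if q > 0)

def pseudo_labels(prob_map, threshold=0.9, ignore=255):
    """Generate pseudo-labels for a target-domain prediction.

    `prob_map` is an H x W grid of per-pixel class probability vectors.
    Pixels whose top-class probability clears `threshold` keep that class;
    the rest are marked `ignore` so they contribute no gradient during
    self-training.
    """
    labels = []
    for row in prob_map:
        out_row = []
        for p in row:
            conf = max(p)
            out_row.append(p.index(conf) if conf >= threshold else ignore)
        labels.append(out_row)
    return labels

probs = [[[0.95, 0.05], [0.55, 0.45]]]  # one confident pixel, one uncertain
labels = pseudo_labels(probs, threshold=0.9)  # [[0, 255]]
```

The adversarial entropy strategy pushes the uncertain pixel (high `entropy`) toward the confident regime, after which more pixels clear the threshold and join the pseudo-labeled set.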
Thermodynamic parameters play a crucial role in determining polar sea ice thickness (SIT); however, modeling their relationship is difficult due to the complexity of the influencing mechanisms. In this study, we propose a self-attention convolutional neural network (SAC-Net), which aims to model the relationship between thermodynamic parameters and SIT more parsimoniously, allowing us to estimate SIT directly from these parameters. SAC-Net uses a fully convolutional network as a baseline model to detect the spatial information of the thermodynamic parameters. Furthermore, a self-attention block is introduced to enhance the correlation among features. SAC-Net was trained on a dataset of SIT observations and thermodynamic data from the 2012–2019 freeze-up period, including surface upward sensible heat flux, surface upward latent heat flux, 2 m temperature, skin temperature, and surface snow temperature. The results show that our neural network model outperforms two thermodynamic-based SIT products in terms of accuracy and can provide reliable estimates of SIT. This study demonstrates the potential of the neural network to provide accurate and automated predictions of Arctic winter SIT from thermodynamic data, and, thus, the network can be used to support decision-making in certain fields, such as polar shipping, environmental protection, and climate science.
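The self-attention block described above enhances correlations among features by letting every spatial position attend to every other. A minimal sketch of scaled dot-product self-attention over flattened grid positions is shown below; the identity query/key/value projections are a simplifying assumption, since a real block such as SAC-Net's would learn these projections.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]  # shift by max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(features):
    """Scaled dot-product self-attention over spatial positions.

    `features` is a list of N position vectors (a flattened feature map).
    Each output position is a convex combination of all input positions,
    weighted by similarity, so long-range correlations among thermodynamic
    features can inform the estimate at every grid cell.
    """
    d = len(features[0])
    scale = math.sqrt(d)
    out = []
    for q in features:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale
                  for k in features]
        weights = softmax(scores)
        out.append([sum(w * v[c] for w, v in zip(weights, features))
                    for c in range(d)])
    return out

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
attended = self_attention(feats)
```

Because the attention weights at each position sum to one, each output vector stays within the convex hull of the inputs while mixing in context from similar positions elsewhere on the grid.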