This paper presents a novel technique for building change detection from remote sensing imagery. It comprises two main stages: (1) object-specific discriminative features are extracted using the Morphological Building Index (MBI) to automatically detect the presence of buildings in remote sensing images, and (2) pixel-based image matching is measured on the basis of the Mutual Information (MI) of the images, using Normalized Mutual Information (NMI). The MBI feature values are computed for each image of a pair taken over the same region at two different times, and the differences between the two MBI images are measured to indicate building change. MI is estimated locally at every pixel for image matching, and thresholding is then applied to eliminate pixels that show strong similarity between the two dates. Finally, the MBI and NMI images are fused to refine the change result. For evaluation, experiments are carried out on QuickBird and IKONOS images and on images taken from Google Earth. The results show that the proposed technique attains correctness rates above 90%, with an Overall Accuracy (OA) of 89.52%.
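The NMI-based matching step above can be sketched as follows. This is a minimal NumPy illustration of computing Normalized Mutual Information between two co-registered images via their joint intensity histogram; the bin count and the normalization form NMI = (H(A) + H(B)) / H(A, B) are common conventions assumed here, not details taken from the abstract.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """NMI between two equally sized images: (H(A) + H(B)) / H(A, B).

    With this normalization, NMI lies in [1, 2]: it equals 2 for
    identical images and approaches 1 for independent ones.
    """
    # Joint intensity histogram over the two images.
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()   # joint probability
    px = pxy.sum(axis=1)            # marginal of image A
    py = pxy.sum(axis=0)            # marginal of image B

    def entropy(p):
        p = p[p > 0]                # ignore empty bins
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

Estimating NMI in a sliding window, rather than globally as above, would yield the per-pixel similarity map that the thresholding step operates on.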
Automatic extraction of buildings from high-resolution Remote Sensing (RS) imagery is of great practical interest for numerous applications, including urban planning, change detection, disaster management, human population estimation, and many other geospatial applications. This paper proposes a novel, efficient improved ResU-Net architecture called IRU-Net, which integrates a spatial pyramid pooling module with an encoder-decoder structure, in combination with atrous convolutions, modified residual connections, and a new skip connection between the encoder and decoder features, for automatic extraction of buildings from RS images. Moreover, a new dual loss function called binary cross-entropy-dice-loss (BCEDL) is adopted that combines cross-entropy (CE) and dice loss (DL), considering both local and global information to reduce the influence of class imbalance and improve building extraction results. The proposed model is evaluated to demonstrate its generalization on two publicly available datasets: the Aerial Images for Roof Segmentation (AIRS) dataset and the Massachusetts buildings dataset. The proposed IRU-Net achieves an average F1 score of 92.30% on the Massachusetts dataset and 95.65% on the AIRS dataset. Compared to other state-of-the-art deep-learning-based models such as SegNet, U-Net, E-Net, ERFNet, and SRI-Net, the overall accuracy improvements of the IRU-Net model are 9.0% (0.9725 vs. 0.8842), 5.2% (0.9725 vs. 0.9218), 3.0% (0.9725 vs. 0.9428), 1.4% (0.9725 vs. 0.9588), and 0.93% (0.9725 vs. 0.9635) for the AIRS dataset, and 11.6%, 5.9%, 3.1%, 2.7%, and 1.4% for the Massachusetts dataset. These results demonstrate the superiority of the proposed model for building extraction from high-resolution RS images.
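The BCEDL idea of pairing a pixel-wise term (local) with an overlap term (global) can be sketched as below. This NumPy version assumes sigmoid probabilities as input, an equal 1:1 weighting of the two terms, and a standard `smooth` constant in the Dice term; the paper's exact weighting and formulation may differ.

```python
import numpy as np

def bce_dice_loss(pred, target, smooth=1.0, eps=1e-7):
    """Combined binary cross-entropy + Dice loss (BCEDL), a NumPy sketch.

    `pred` holds sigmoid probabilities in (0, 1); `target` holds {0, 1}
    building masks. The 1:1 term weighting is an assumption.
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # numerical stability for the logs
    # Pixel-wise binary cross-entropy: the "local information" term.
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Soft Dice loss: a global overlap term, less sensitive to the
    # background/building class imbalance than BCE alone.
    intersection = np.sum(pred * target)
    dice = 1.0 - (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
    return bce + dice
```

In a training framework the same expression would be written with differentiable tensor operations so gradients flow to the network weights; the arithmetic is identical.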