In this paper, we present a convolutional neural network (CNN)-based method to efficiently combine information from multisensor remotely sensed images for pixel-wise semantic classification. The CNN features obtained from multiple spectral bands are fused at the initial layers of the deep network rather than at the final layers. This early fusion architecture has fewer parameters and thereby reduces computational time and GPU memory usage during training and inference. We also propose a composite fusion architecture that fuses features throughout the network. The methods were validated on four datasets: ISPRS Potsdam, ISPRS Vaihingen, IEEE Zeebruges, and a combined Sentinel-1/Sentinel-2 dataset. For the Sentinel-1/Sentinel-2 dataset, we obtain ground-truth labels for three classes from OpenStreetMap. Results on all datasets show that early fusion, specifically fusion after the third layer of the network, achieves results similar to or better than a decision-level fusion mechanism. The performance of the proposed architecture is also on par with state-of-the-art results.
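As a rough illustration of the early-fusion idea described in this abstract, the sketch below concatenates per-sensor feature maps after a shallow three-layer stem and then shares a single trunk for the remaining layers, which is what keeps the parameter count below that of a decision-level (per-sensor network) fusion. The layer widths, input band counts, and class count are hypothetical placeholders, not the paper's actual architecture.

```python
# Minimal early-fusion sketch for two sensor modalities (illustrative sizes only).
import torch
import torch.nn as nn

class EarlyFusionSegNet(nn.Module):
    """Fuse per-sensor features after a shallow stem, then share all later layers."""
    def __init__(self, in_ch_a=4, in_ch_b=1, num_classes=3):
        super().__init__()
        def stem(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.stem_a = stem(in_ch_a)   # e.g. multispectral bands
        self.stem_b = stem(in_ch_b)   # e.g. DSM or SAR band
        self.shared = nn.Sequential(  # single trunk after the fusion point
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, 1),   # per-pixel class scores
        )

    def forward(self, x_a, x_b):
        # Early fusion: concatenate the two feature maps after the third layer.
        fused = torch.cat([self.stem_a(x_a), self.stem_b(x_b)], dim=1)
        return self.shared(fused)

# Usage on a dummy tile: a 4-band optical patch plus a 1-band auxiliary raster.
if __name__ == "__main__":
    net = EarlyFusionSegNet()
    logits = net(torch.randn(1, 4, 128, 128), torch.randn(1, 1, 128, 128))
    print(logits.shape)  # torch.Size([1, 3, 128, 128])
```

A decision-level fusion baseline would instead run a full network per sensor and combine the class scores at the end, roughly doubling the trunk parameters relative to this shared design.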
We propose an unsupervised algorithm that utilizes spectral, gradient, and textural attributes for spatially segmenting multi/hyperspectral remotely sensed imagery. Our methodology begins by determining the magnitude of spectral intensity variations across the input scene, using a multiband gradient detection scheme optimized for remotely sensed image data. The resultant gradient map drives a dynamic region growth process that is initiated at pixel locations with small gradient magnitudes and concluded at sites with large gradient magnitudes, yielding an initial region map. This region map is then combined with several co-occurrence-matrix-derived textural descriptors, along with intensity and gradient features, in a multivariate-analysis-based region merging procedure that fuses regions with similar characteristics to produce the final segmentation output. Our approach was tested on several multi/hyperspectral datasets, and the results show favorable performance in comparison with state-of-the-art techniques.
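To make the pipeline concrete, here is a minimal Python sketch, not the authors' implementation, of its three main steps: a multiband gradient map (per-band Sobel magnitudes combined by a maximum), gradient-guided region growth (approximated here with a watershed flooded from low-gradient seeds), and a crude spectral-similarity merge standing in for the multivariate, texture-aware merging stage. The function names and the thresholds `seed_quantile` and `merge_thresh` are illustrative assumptions.

```python
# Simplified sketch of: multiband gradient -> region growth -> region merging.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def multiband_gradient(img):
    """Combine per-band Sobel gradient magnitudes into one map (max over bands)."""
    return np.max(np.stack([sobel(img[..., b]) for b in range(img.shape[-1])]), axis=0)

def segment(img, seed_quantile=0.2, merge_thresh=0.05):
    grad = multiband_gradient(img)
    # Seed growth at low-gradient (homogeneous) pixels and flood toward high gradients.
    seeds, _ = ndi.label(grad < np.quantile(grad, seed_quantile))
    labels = watershed(grad, markers=seeds)
    # Crude merge: join regions whose mean spectra differ by less than merge_thresh.
    ids = np.unique(labels)
    means = np.array([img[labels == i].mean(axis=0) for i in ids])
    remap = {i: i for i in ids}
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            if np.linalg.norm(means[a] - means[b]) < merge_thresh:
                remap[ids[b]] = remap[ids[a]]
    return np.vectorize(remap.get)(labels)

# Example on a random 3-band image scaled to [0, 1].
seg = segment(np.random.rand(64, 64, 3))
```

The paper's merging stage additionally uses co-occurrence-matrix texture descriptors and gradient statistics in a multivariate test; the spectral-distance threshold above is only a placeholder for that decision rule.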