Because of hardware limitations, such as the sensors, processing capacity, and high-accuracy attitude control equipment, traditional optical remote sensing (RS) imagery captures information about a given scene from a single angle or a very small number of angles. Nowadays, with video satellites coming into service, obtaining imagery of the same scene from a more-or-less continuous array of angles has become a reality. In this paper, we analyze the differences between traditional RS data and continuous multi-angle remote sensing (CMARS) data, and unravel the characteristics of the CMARS data. We study the advantages of using CMARS data for classification, seeking to capitalize on the complementarity of multi-angle information while reducing the embedded redundancy. Our arguments are substantiated by real-life experiments in which CMARS data are used to classify urban land covers with a support vector machine (SVM) classifier. The experiments show the superiority of CMARS data over traditional data for classification: the overall accuracy increases by up to about 9% with CMARS data. Furthermore, we investigate the advantages and disadvantages of using the CMARS data directly, and how such data can be better utilized through the extraction of key features that characterize the variations of spectral reflectance along the entire angular array. This research lays the foundation for the use of CMARS data in future research and applications.
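The classification setup described above can be illustrated with a minimal sketch; this is not the authors' code, and the array sizes, number of classes, and synthetic data are assumptions for illustration only. It contrasts an SVM trained on single-angle spectra with one trained on CMARS-style features stacked along the angular dimension; the reported accuracy gain would only appear with real multi-angle imagery, not with random placeholders.

```python
# Minimal sketch (not the authors' implementation): single-angle vs. stacked
# multi-angle (CMARS-like) features fed to an SVM classifier.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_angles, n_bands = 2000, 12, 4        # assumed sizes, illustration only

# Synthetic per-pixel reflectance cube: (pixels, angles, bands)
cube = rng.normal(size=(n_pixels, n_angles, n_bands))
labels = rng.integers(0, 5, size=n_pixels)       # 5 hypothetical land-cover classes

single_angle = cube[:, 0, :]                     # traditional single-view features
multi_angle = cube.reshape(n_pixels, -1)         # CMARS-style stacked angular features

for name, X in [("single-angle", single_angle), ("multi-angle", multi_angle)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    # With random data both accuracies sit near chance; the comparison is the point.
    print(name, "overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```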
Remote sensing is an important means of monitoring the dynamics of the Earth's surface. Because of technological limitations, it is still challenging for single-sensor systems to provide images with both high spatial resolution and high revisit frequency. Spatiotemporal fusion is an effective approach for obtaining remote sensing images that are high in both spatial and temporal resolution. Although dictionary-learning fusion methods appear promising for spatiotemporal fusion, they do not consider the structural similarity between spectral bands in the fusion task. To capitalize on this feature, a novel fusion model, named the adaptive multi-band constraints fusion model (AMCFM), is formulated in this paper to produce better fusion images. This model considers the structural similarity between spectral bands and uses edge information to improve the fusion results by adopting adaptive multi-band constraints. Moreover, to address the shortcoming of the ℓ1 norm, which only considers the sparsity structure of dictionaries, our model uses the nuclear norm, which balances sparsity and correlation by producing appropriate coefficients in the reconstruction step. We perform experiments on real-life images to substantiate our conceptual arguments. In the empirical study, the near-infrared (NIR), red, and green bands of Landsat Enhanced Thematic Mapper Plus (ETM+) and Moderate Resolution Imaging Spectroradiometer (MODIS) images are fused, and the prediction accuracy is assessed by both quantitative metrics and visual inspection. The experiments show that our proposed method performs better than state-of-the-art methods. The study also sheds light on future research.
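To make the nuclear-norm point concrete, the sketch below shows the proximal operator of the nuclear norm (singular-value soft-thresholding), which is the standard building block a nuclear-norm-regularized reconstruction step would call in place of the element-wise soft-thresholding used for an ℓ1 penalty. This is not the AMCFM implementation; the matrix shapes and the threshold value are illustrative assumptions.

```python
# Minimal sketch, not the paper's code: prox of the nuclear norm via
# singular-value soft-thresholding, the low-rank counterpart of l1 shrinkage.
import numpy as np

def svt(A: np.ndarray, tau: float) -> np.ndarray:
    """Return argmin_X 0.5*||X - A||_F^2 + tau*||X||_* (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft-threshold the singular values
    return (U * s_shrunk) @ Vt

# Toy usage: shrink a low-rank-plus-noise coefficient matrix (shapes are arbitrary).
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 32)) + 0.1 * rng.normal(size=(64, 32))
X = svt(A, tau=1.0)
print("rank before:", np.linalg.matrix_rank(A), "rank after:", np.linalg.matrix_rank(X))
```

Unlike element-wise ℓ1 shrinkage, which zeroes individual coefficients independently, thresholding the singular values shrinks the whole coefficient matrix toward low rank, which is one way to encode correlation across spectral bands while still promoting sparsity in the spectrum of the solution.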