In this paper, we propose a novel method for the joint classification of multidate and multiresolution remote sensing imagery, an important and relatively unexplored classification problem. The proposed classifier is based on an explicit hierarchical graph-based model that is flexible enough to address a coregistered time series of images collected at different spatial resolutions. Within this framework, a novel element of the proposed approach is the use of multiple quadtrees in cascade, each associated with the images available at one observation date of the considered time series. For each date, the input images are inserted in a hierarchical structure according to their resolutions, and missing levels are filled in with wavelet transforms of the images embedded in finer-resolution levels. This approach aims both to exploit multiscale information, which is known to play a crucial role in high-resolution image analysis, and to support input images acquired at different resolutions within the input time series. Experimental results are reported for multitemporal and multiresolution optical data.
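As a concrete illustration of the level-filling step described above, the following is a minimal sketch, assuming a single-band image stored as a NumPy array with power-of-two side length and using the PyWavelets package (a choice made here for illustration, not stated in the paper), of how coarser quadtree levels can be generated from a finer-resolution image through successive 2-D wavelet approximations.

```python
# Minimal sketch: filling missing quadtree levels with wavelet approximations.
# Assumes a single-band image as a 2-D NumPy array whose side length is a
# power of two; uses PyWavelets (pywt) for the 2-D discrete wavelet transform.
import numpy as np
import pywt

def build_quadtree_levels(image, n_levels):
    """Return a list of images from finest (input) to coarsest resolution.

    Each missing level is obtained as the approximation (low-pass) band of a
    2-D DWT of the previous, finer level, so that level k has half the side
    length of level k-1, matching the quadtree structure.
    """
    levels = [image.astype(float)]
    for _ in range(n_levels - 1):
        approx, _details = pywt.dwt2(levels[-1], "haar")
        levels.append(approx)
    return levels

if __name__ == "__main__":
    img = np.random.rand(256, 256)           # stand-in for a panchromatic band
    pyramid = build_quadtree_levels(img, 4)   # 256, 128, 64, 32 pixels per side
    print([lvl.shape for lvl in pyramid])
```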
In this paper, a hierarchical probabilistic graphical model is proposed to tackle the joint classification of multiresolution and multisensor remote sensing images of the same scene. This problem is crucial in the study of satellite imagery and jointly involves multiresolution and multisensor image fusion. The proposed framework consists of a hierarchical Markov model with a quadtree structure to model information contained at different spatial scales, a planar Markov model to account for contextual spatial information at each resolution, and decision tree ensembles for pixelwise modeling. This probabilistic graphical model and its topology are especially suited to very high resolution (VHR) image data. The theoretical properties of the proposed model are analyzed: the causality of the whole framework is mathematically proved, enabling the use of time-efficient inference algorithms, such as those based on the marginal posterior mode criterion, which is non-iterative when applied to quadtree structures. This is especially advantageous for classification methods addressing multiresolution tasks formulated on hierarchical Markov models. Within the proposed framework, two multimodal classification algorithms are developed that incorporate Markov mesh and spatial Markov chain concepts. Experimental validation conducted with two datasets containing VHR multispectral, panchromatic, and radar satellite images confirms the effectiveness of the proposed framework. The proposed approach is also compared with previous methods based on alternative strategies for multimodal fusion.
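To make the pixelwise modeling step with decision tree ensembles concrete, a minimal sketch follows. The array shapes, the random-forest choice, and the helper name pixelwise_posteriors are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: pixelwise class-posterior estimation with a decision tree
# ensemble, as a stand-in for the pixelwise modeling step described above.
# Assumes a co-registered multiband image (H x W x B) and a sparse set of
# labeled training pixels; shapes and parameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixelwise_posteriors(image, train_mask, train_labels, n_trees=200):
    """Return an (H, W, n_classes) array of per-pixel posterior estimates."""
    h, w, b = image.shape
    X_train = image[train_mask]            # (n_train, B) feature vectors
    y_train = train_labels[train_mask]     # (n_train,) class indices
    forest = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    forest.fit(X_train, y_train)
    proba = forest.predict_proba(image.reshape(-1, b))
    return proba.reshape(h, w, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 4))                   # 4-band toy image
    labels = rng.integers(0, 3, size=(64, 64))      # toy ground truth
    mask = rng.random((64, 64)) < 0.05              # 5% of pixels for training
    post = pixelwise_posteriors(img, mask, labels)
    print(post.shape)                               # (64, 64, 3)
```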
This letter proposes two methods for the supervised classification of multisensor optical and SAR images with possibly different spatial resolutions. Both methods are formulated within a single framework based on hierarchical Markov random fields. Distinct quadtrees associated with the individual information sources are defined to jointly address multisensor, multiresolution, and possibly multifrequency fusion, and are integrated with finite mixture models and the marginal posterior mode criterion. Experimental validation is conducted with Pléiades, COSMO-SkyMed, RADARSAT-2, and GeoEye-1 data.
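To illustrate the finite-mixture component, the sketch below fits one mixture per class to single-sensor pixel features. The Gaussian component densities and the helper names fit_class_mixtures and class_log_likelihoods are illustrative choices only; the letter's specific mixture families for SAR and optical statistics are not reproduced here.

```python
# Minimal sketch: class-conditional finite mixture models, one per class,
# fit to the pixels of a single sensor. A Gaussian mixture is used here as
# a generic illustration; the specific component densities are a modeling
# choice and are not taken from the letter.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_mixtures(features, labels, n_components=3):
    """Fit one finite mixture per class; return {class: fitted mixture}."""
    mixtures = {}
    for c in np.unique(labels):
        gm = GaussianMixture(n_components=n_components, covariance_type="full")
        gm.fit(features[labels == c])
        mixtures[c] = gm
    return mixtures

def class_log_likelihoods(features, mixtures):
    """Per-pixel log-likelihood under each class mixture, shape (N, n_classes)."""
    return np.column_stack([m.score_samples(features) for m in mixtures.values()])
```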
In this paper, the problem of the classification of multiresolution and multisensor remotely sensed data is addressed by proposing a multiscale Markov mesh model. Multiresolution and multisensor fusion are jointly achieved through an explicitly hierarchical probabilistic graphical classifier, which uses a quadtree structure to model the interactions across different spatial resolutions and a symmetric Markov mesh random field to deal with contextual information at each scale, favoring applicability to very high resolution imagery. Unlike previous hierarchical Markovian approaches, data collected by distinct sensors are fused here through either the graph topology itself (across its layers) or decision tree ensemble methods (within each layer). The proposed model benefits from strong analytical properties, most remarkably causality, which make it possible to apply time-efficient, non-iterative inference algorithms.
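The across-layer (hierarchical) fusion can be pictured with the following minimal sketch of a single top-down step, in which child-scale pixelwise posteriors are modulated by parent-scale posteriors through a class transition matrix. The transition matrix and the simple product-and-normalize rule are assumptions made for illustration and are not the paper's exact recursion.

```python
# Minimal sketch of one top-down fusion step on a quadtree: child-scale
# pixelwise posteriors are combined with the posteriors of their parent
# pixels through a class transition matrix. The transition matrix and the
# product-and-normalize rule below are illustrative assumptions.
import numpy as np

def top_down_step(child_post, parent_post, transition):
    """child_post: (2H, 2W, K); parent_post: (H, W, K); transition: (K, K)."""
    # Replicate each parent pixel over its 2x2 block of children.
    parent_up = np.kron(parent_post, np.ones((2, 2, 1)))      # (2H, 2W, K)
    # Mix parent information through P(child class | parent class).
    prior_from_parent = parent_up @ transition                 # (2H, 2W, K)
    fused = child_post * prior_from_parent
    return fused / fused.sum(axis=-1, keepdims=True)
```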
In this paper, we address the problem of the joint classification of multiple images acquired over the same scene at different spatial resolutions. From an application viewpoint, this problem is important in several contexts, most remarkably satellite and aerial imagery. From a methodological perspective, we use a probabilistic graphical approach and adopt a hierarchical Markov mesh framework that we have recently developed and that models the spatial-contextual classification of multiresolution and possibly multisensor images. Here, we focus on the methodological properties of this framework. First, we prove the causality of the model, a highly desirable property with respect to the computational cost of inference. Then, we derive the expression of the marginal posterior mode criterion for this model and discuss the related assumptions. Experimental results with multispectral and panchromatic satellite images are also presented.
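For reference, the marginal posterior mode (MPM) criterion on a quadtree can be written schematically as below. The notation (site s, parent s⁻, descendants d(s), observations y) follows the standard hierarchical Markov random field literature and is an assumption here, since the abstract does not fix it.

```latex
% Schematic MPM criterion and top-down recursion on a quadtree (standard
% hierarchical MRF notation, assumed rather than quoted from the paper).
\hat{x}_s = \arg\max_{x_s} P(x_s \mid \mathbf{y}),
\qquad
P(x_s \mid \mathbf{y}) = \sum_{x_{s^-}} P\bigl(x_s \mid x_{s^-}, \mathbf{y}_{d(s)}\bigr)\, P\bigl(x_{s^-} \mid \mathbf{y}\bigr)
```

Here y_{d(s)} collects the observations in the subtree rooted at s, so the marginal posterior of each site is obtained recursively from its parent in a single top-down sweep, which is what makes the criterion non-iterative on quadtree structures.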