This paper studies a new Bayesian unmixing algorithm for hyperspectral images. Each pixel of the image is modeled as a linear combination of so-called endmembers. These endmembers are assumed to be random in order to model uncertainty about their actual values. More precisely, we model endmembers as Gaussian vectors whose means have been determined using an endmember extraction algorithm such as the well-known N-FINDR or VCA algorithms. This paper proposes to estimate the mixture coefficients (referred to as abundances) using a Bayesian algorithm. Suitable priors are assigned to the abundances in order to satisfy positivity and additivity constraints, whereas conjugate priors are chosen for the remaining parameters. A hybrid Gibbs sampler is then constructed to generate samples distributed according to the joint posterior of the abundances and noise variances. The performance of the proposed methodology is evaluated by comparison with other unmixing algorithms on synthetic and real images. Index Terms: Bayesian inference, Monte Carlo methods, spectral unmixing, hyperspectral images, normal compositional model.
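The sketch below illustrates the kind of sampling step such an approach relies on. It assumes a simplified normal compositional model in which the pixel variance scales with the squared norm of the abundance vector, and it enforces the positivity and sum-to-one constraints with a random-walk Metropolis move on the simplex; the function names (`ncm_loglik`, `sample_abundances`) and all numerical settings are illustrative, not the paper's actual hybrid Gibbs sampler.

```python
import numpy as np

# Minimal sketch (assumptions): NCM-style likelihood y | a, s2 ~ N(M a, s2 * ||a||^2 * I),
# with the abundance simplex constraint (a >= 0, sum(a) = 1) enforced by rejecting
# proposals that leave the simplex. Names and settings are illustrative only.

def ncm_loglik(y, M, a, s2):
    """Log-likelihood of one pixel under a simplified normal compositional model."""
    L = y.size
    var = s2 * np.dot(a, a)                  # variance scales with ||a||^2 under the NCM
    resid = y - M @ a
    return -0.5 * (L * np.log(2 * np.pi * var) + resid @ resid / var)

def sample_abundances(y, M, s2, n_iter=5000, step=0.05, seed=None):
    """Random-walk Metropolis over the abundance simplex (single pixel)."""
    rng = np.random.default_rng(seed)
    R = M.shape[1]
    a = np.full(R, 1.0 / R)                  # start from the simplex centre
    samples = []
    for _ in range(n_iter):
        prop = a + step * rng.standard_normal(R)
        prop -= (prop.sum() - 1.0) / R       # project back onto the sum-to-one hyperplane
        if np.all(prop >= 0):                # positivity constraint
            log_ratio = ncm_loglik(y, M, prop, s2) - ncm_loglik(y, M, a, s2)
            if np.log(rng.uniform()) < log_ratio:
                a = prop
        samples.append(a.copy())
    return np.array(samples)

# Toy usage: 3 endmember means over 50 bands, true abundances (0.6, 0.3, 0.1).
rng = np.random.default_rng(0)
M = np.abs(rng.standard_normal((50, 3)))
a_true, s2 = np.array([0.6, 0.3, 0.1]), 1e-3
y = M @ a_true + np.sqrt(s2 * a_true @ a_true) * rng.standard_normal(50)
print(sample_abundances(y, M, s2, seed=1)[2500:].mean(axis=0))  # posterior-mean estimate
```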
Hyperspectral images present specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully on several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition achieves almost the same performance while being preferable in terms of complexity. It is also shown that this decomposition significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, which define relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
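As a rough illustration of the kind of anisotropic 3-D decomposition discussed above, the sketch below applies a multilevel 1-D DWT along the spectral axis and then a multilevel 2-D DWT on each resulting spatial plane, using PyWavelets. The level counts (4 spectral, 3 spatial) and the 'bior4.4' wavelet are assumptions for illustration, not necessarily the configuration retained in the paper.

```python
import numpy as np
import pywt  # PyWavelets

# Minimal sketch (assumptions): a separable "spectral-then-spatial" 3-D decomposition.
# Level counts and wavelet choice are illustrative, not the paper's fixed decomposition.

def spectral_then_spatial_dwt(cube, wavelet="bior4.4", spec_levels=4, spat_levels=3):
    """cube: (bands, rows, cols) hyperspectral array -> nested list of 2-D coefficients."""
    # 1-D multilevel DWT along the spectral axis (axis=0).
    spec_coeffs = pywt.wavedec(cube, wavelet, level=spec_levels, axis=0)
    decomposed = []
    for subband in spec_coeffs:                 # each subband has shape (n_planes, rows, cols)
        planes = [pywt.wavedec2(plane, wavelet, level=spat_levels)
                  for plane in subband]         # 2-D DWT of every spectral plane
        decomposed.append(planes)
    return decomposed

# Toy usage on a random 64-band, 128x128 cube.
cube = np.random.rand(64, 128, 128).astype(np.float32)
coeffs = spectral_then_spatial_dwt(cube)
print(len(coeffs), "spectral subbands; coarsest spatial approximation shape:",
      coeffs[0][0][0].shape)
```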
The concept of delay/Doppler (DD) altimetry (DDA) has been under study since the mid-1990s, aiming at reducing the measurement noise and increasing the along-track resolution in comparison with conventional pulse-limited altimetry. This paper introduces a new model for the mean backscattered power waveform acquired by a radar altimeter operating in synthetic aperture radar mode, as well as an associated least squares (LS) estimation algorithm. As in conventional altimetry (CA), the mean power can be expressed as the convolution of three terms: the flat surface impulse response (FSIR), the probability density function of the heights of the specular scatterers, and the time/frequency point target response of the radar. An important contribution of this paper is the derivation of an analytical formula for the FSIR associated with DDA. This analytical formula is obtained for a circular antenna pattern, no mispointing, no vertical speed effect, and uniform scattering. The double convolution defining the mean echo power can then be computed numerically, resulting in a 2-D semi-analytical model called the DD map (DDM). This DDM depends on three altimetric parameters: the epoch, the sea surface wave height, and the amplitude. A multi-look model is obtained by summing all the reflected echoes from the same along-track surface location of interest, after applying appropriate delay compensation (range migration) to align the DDMs to the same reference. The second contribution of this paper concerns the estimation of the parameters associated with the multi-look semi-analytical model. An LS approach is investigated by means of the Levenberg-Marquardt algorithm. Simulations conducted on synthetic altimetric waveforms allow the performance of the proposed estimation algorithm to be assessed. The analysis of CryoSat-2 waveforms shows an improvement in parameter estimation when compared to CA.
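To give a flavor of the LS/Levenberg-Marquardt estimation step, the sketch below fits a three-parameter waveform to noisy data with scipy. The paper fits the multi-look semi-analytical DDM; here a simplified Brown-like rising-edge model, P(t) = (A/2)(1 + erf((t - tau)/sigma_c)), stands in for that model purely for illustration, and the parameter names (amplitude A, epoch tau, rise time sigma_c as a loose proxy for wave height) are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

# Minimal sketch (assumptions): a simplified erf-edge waveform replaces the paper's
# semi-analytical DDM; only the LS / Levenberg-Marquardt fitting step is illustrated.

def waveform_model(params, t):
    amp, epoch, sigma_c = params
    return 0.5 * amp * (1.0 + erf((t - epoch) / (np.sqrt(2.0) * sigma_c)))

def residuals(params, t, power):
    return waveform_model(params, t) - power

# Toy usage: simulate a noisy waveform, then estimate (amplitude, epoch, rise time).
rng = np.random.default_rng(0)
t = np.arange(128.0)                             # range gates
true_params = np.array([1.0, 40.0, 3.0])
power = waveform_model(true_params, t) + 0.02 * rng.standard_normal(t.size)

fit = least_squares(residuals, x0=[0.5, 60.0, 5.0], args=(t, power), method="lm")
print("estimated (amplitude, epoch, rise time):", fit.x)
```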