Hyperspectral unmixing is a crucial preprocessing step for material classification and recognition. In the last decade, nonnegative matrix factorization (NMF) and its extensions have been intensively studied to unmix hyperspectral imagery and recover the material endmembers. As an important constraint for NMF, sparsity has been modeled using the L1 regularizer. Unfortunately, the L1 regularizer cannot enforce further sparsity when the full additivity constraint of material abundances is used, thus limiting the practical efficacy of NMF methods in hyperspectral unmixing. In this paper, we extend the NMF method by incorporating the L1/2 sparsity constraint, which we name L1/2-NMF. The L1/2 regularizer not only induces sparsity but is also a better choice among Lq (0 < q < 1) regularizers. We propose an iterative estimation algorithm for L1/2-NMF, which provides sparser and more accurate results than those delivered using the L1 norm. We illustrate the utility of our method on synthetic and real hyperspectral data and compare our results to those yielded by other state-of-the-art methods.
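The iterative estimation for an L1/2-sparsity-constrained NMF can be sketched with multiplicative updates, where the abundance update gains an extra term from the gradient of the L1/2 penalty. This is a minimal NumPy illustration, not the paper's reference implementation; the function name `l12_nmf`, the penalty weight `lam`, and the fixed iteration count are assumptions:

```python
import numpy as np

def l12_nmf(V, R, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Sketch of multiplicative updates for L1/2-sparsity-constrained NMF.

    V   : (bands, pixels) nonnegative observation matrix
    R   : number of endmembers
    lam : weight of the L1/2 sparsity term on the abundances S
    Returns the endmember matrix A (bands, R) and abundances S (R, pixels).
    """
    rng = np.random.default_rng(seed)
    L, N = V.shape
    A = rng.random((L, R)) + eps
    S = rng.random((R, N)) + eps
    for _ in range(n_iter):
        # Standard NMF multiplicative update for the endmembers A
        A *= (V @ S.T) / (A @ S @ S.T + eps)
        # Abundance update with the (lam/2) * S^(-1/2) term contributed
        # by the gradient of the L1/2 regularizer
        S *= (A.T @ V) / (A.T @ A @ S + 0.5 * lam * S ** -0.5 + eps)
        S = np.maximum(S, eps)  # keep S strictly positive for S^(-1/2)
    return A, S
```

Because S appears as S^(-1/2) in the denominator, small abundances are penalized more strongly, which is what drives the additional sparsity relative to the L1 regularizer.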
Many spectral unmixing approaches, ranging from geometry and algebra to statistics, have been proposed, among which nonnegative matrix factorization (NMF) based ones form an important family. The original NMF-based unmixing algorithm loses the spectral and spatial information between mixed pixels when stacking the spectral responses of the pixels into an observation matrix. Therefore, various constrained NMF methods have been developed to impose spectral structure, spatial structure, and joint spectral-spatial structure on NMF, so that the estimated endmembers and abundances preserve these structures. Compared with the matrix format, a third-order tensor is a more natural representation of a hyperspectral data cube as a whole, by which the intrinsic structure of hyperspectral imagery can be losslessly retained. Extending NMF-based methods, a matrix-vector nonnegative tensor factorization (NTF) model is proposed in this paper for spectral unmixing. Different from widely used tensor factorization models such as canonical polyadic decomposition (CPD) and Tucker decomposition, the proposed method is derived from block term decomposition (BTD), which combines CPD and Tucker decomposition. This leads to a more flexible framework for modeling various application-dependent problems. The matrix-vector NTF decomposes a third-order tensor into the sum of several component tensors, with each component tensor being the outer product of a vector (endmember) and a matrix (corresponding abundances). From a formal perspective, this tensor decomposition is consistent with the linear spectral mixture model. From an informational perspective, the structures within the spatial domain, within the spectral domain, and across the spectral-spatial domain are treated interdependently. Experiments demonstrate that the proposed method outperforms several state-of-the-art NMF-based unmixing methods.
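The matrix-vector structure described above can be illustrated with a short forward-model sketch: each component tensor is the outer product of an abundance map (a matrix) with an endmember spectrum (a vector), and summing the components reproduces the linear spectral mixture per pixel. The function name `mv_ntf_reconstruct` is hypothetical, and this shows only the reconstruction, not the BTD fitting algorithm:

```python
import numpy as np

def mv_ntf_reconstruct(abund_maps, endmembers):
    """Assemble a hyperspectral cube from the matrix-vector NTF model.

    abund_maps : list of R nonnegative (rows, cols) abundance maps
    endmembers : (bands, R) matrix of endmember spectra
    Each component tensor is the outer product of one abundance map
    with one endmember spectrum; the cube is their sum.
    """
    rows, cols = abund_maps[0].shape
    bands, R = endmembers.shape
    cube = np.zeros((rows, cols, bands))
    for r in range(R):
        # outer product of a matrix (abundances) and a vector (spectrum)
        cube += abund_maps[r][:, :, None] * endmembers[:, r][None, None, :]
    return cube
```

At any pixel (i, j), the reconstructed spectrum equals the endmember matrix times the vector of that pixel's abundances, which is exactly the linear spectral mixture model the abstract refers to.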
Hyperspectral remote sensing imagery contains rich information on the spectral and spatial distributions of distinct surface materials. Owing to its numerous and continuous spectral bands, hyperspectral data enables more accurate and reliable material classification than panchromatic or multispectral imagery. However, high-dimensional spectral features and the limited number of available training samples cause difficulties in classification, such as overfitting in learning, sensitivity to noise, computational overload, and lack of meaningful physical interpretability. In this paper, we propose a hyperspectral feature extraction and pixel classification method based on structured sparse logistic regression and three-dimensional discrete wavelet transform (3D-DWT) texture features. The 3D-DWT decomposes a hyperspectral data cube at different scales, frequencies, and orientations, during which the hyperspectral data cube is treated as a whole tensor instead of being adapted into a vector or matrix. This allows the capture of geometrical and statistical spectral-spatial structures. After the feature extraction step, sparse representation/modeling is applied for data analysis and processing via sparse regularized optimization, which selects a small subset of the original feature variables to model the data for regression and classification purposes. A linear structured sparse logistic regression model is proposed to simultaneously select the discriminant features from the pool of 3D-DWT texture features and learn the coefficients of the linear classifier, in which prior knowledge about feature structure can be mapped into various sparsity-inducing norms such as the lasso, group lasso, and sparse group lasso. Furthermore, to overcome the limitation of linear models, we extend the linear sparse model to nonlinear classification by partitioning the feature space into subspaces of linearly separable samples.
The advantages of our methods are validated on real hyperspectral remote sensing datasets.
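Decomposing the cube as a whole tensor can be illustrated with a single-level 3-D Haar DWT sketch in NumPy: low/high-pass filtering along each of the three axes yields eight subbands, each downsampled by two. The Haar wavelet, the single level, and the name `haar_dwt3` are simplifying assumptions; the paper's method may use other wavelets and multiple levels:

```python
import numpy as np

def haar_dwt3(cube):
    """One-level 3-D Haar DWT of a hyperspectral cube (rows, cols, bands).

    Splits the cube into 8 subbands, one per low ('L') / high ('H')
    combination along the three axes; all dimensions are assumed even.
    """
    def split(x, axis):
        even = [slice(None)] * 3
        odd = [slice(None)] * 3
        even[axis] = slice(0, None, 2)
        odd[axis] = slice(1, None, 2)
        a, b = x[tuple(even)], x[tuple(odd)]
        # orthonormal Haar: pairwise average and difference
        return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

    subbands = {'': cube}
    for axis in range(3):            # filter rows, columns, then bands
        nxt = {}
        for key, x in subbands.items():
            lo, hi = split(x, axis)
            nxt[key + 'L'] = lo
            nxt[key + 'H'] = hi
        subbands = nxt
    return subbands                  # keys 'LLL', 'LLH', ..., 'HHH'
```

Because the transform is orthonormal, the total energy of the eight subbands equals that of the input cube, and the subband coefficients at each spatial location can serve as the texture features fed to the classifier.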
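The lasso-style selection step can be sketched with a plain L1-regularized logistic regression solved by proximal gradient descent (soft-thresholding). This minimal sketch covers only the lasso penalty, not the group or sparse group lasso variants from the paper, and the names `sparse_logreg`, `lam`, and `lr` are assumptions:

```python
import numpy as np

def sparse_logreg(X, y, lam=0.05, lr=0.1, n_iter=500):
    """L1-regularized (lasso) logistic regression via proximal gradient.

    X   : (n_samples, n_features) feature matrix (e.g. texture features)
    y   : labels in {0, 1}
    lam : L1 penalty weight; larger values zero out more coefficients,
          performing feature selection jointly with classifier training.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / n           # gradient of the logistic loss
        w -= lr * grad                     # gradient step
        # proximal step: soft-threshold toward zero (lasso)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w
```

The soft-thresholding step is what makes the learned weight vector sparse, so the nonzero entries of `w` identify the selected discriminant features while the same vector serves as the linear classifier.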