2018
DOI: 10.1109/lgrs.2018.2841400
Stacked Nonnegative Sparse Autoencoders for Robust Hyperspectral Unmixing



Cited by 100 publications (45 citation statements)
References 12 publications
“…Three of the comparison methods are based on deep learning. The comparison methods are vertex component analysis (VCA) [11], sparsity-constrained nonnegative matrix factorization (L1/2-NMF) [13], the sticky hierarchical Dirichlet process (SHDP) [18], a spatial-spectral blind unmixing method, spatial group sparsity regularized nonnegative matrix factorization (SGSRNMF) [42], also a spatial-spectral blind unmixing method, deep autoencoder unmixing (DAEU), an autoencoder-based method described in [6], the stacked nonnegative sparse autoencoder (SNSA) unmixing method described in [47], and an untied denoising autoencoder with sparsity (uDAS) unmixing method described in [48].…”
Section: Methods
confidence: 99%
“…In addition, we also provide the test results obtained with NMF-based methods, NMF-sp [45] and collaborative nonnegative matrix factorization (CNMF) [46], and topic-based methods, PLSA [33], PLSA-sp [22], which introduces a sparsity constraint over the documents, and LDA [32], as shown in the latest papers [23]. To further illustrate the effectiveness of the proposed method for complex datasets, the results of some recent methods, a deep autoencoder network (DAEN) [16], a stacked nonnegative sparse autoencoder (SNSA) [18], deep autoencoder unmixing (DAEU) [47], and dyadic cyclic descent optimization (DCD) [48], are shown in the results of the Jasper and Urban datasets [16]–[19].…”
Section: A. Experimental Settings
confidence: 99%
“…To tackle nonlinearities, nonlinear kernelized NMF was presented in [16]. More recently, unmixing based on autoencoders [17]–[22] has been employed to estimate endmembers and fractional abundances simultaneously. However, these algorithms are either limited to the linear mixing model or merely port an existing nonlinear (bilinear) model into the autoencoder framework.…”
Section: Introduction
confidence: 99%
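The citation statement above refers to autoencoder-based unmixing, where a pixel spectrum is encoded into nonnegative abundances and decoded through endmember spectra. The following is a minimal illustrative sketch of that idea, not the implementation from any of the cited papers: the decoder columns `E` play the role of endmember estimates, the encoder is tied to the decoder via the pseudoinverse (a simplification assumed here), nonnegativity is enforced by clipping, and all names and hyperparameters are hypothetical.

```python
import numpy as np

# Sketch of linear autoencoder-style unmixing (assumed setup, not the
# cited papers' method): encode a pixel x into nonnegative abundances
# a = max(W x, 0), decode as x_hat = E a, with E >= 0 acting as the
# endmember matrix. The encoder W is tied to the decoder as pinv(E).

rng = np.random.default_rng(0)
bands, n_end, n_pix = 50, 3, 200

# Synthetic scene: nonnegative endmembers, abundances on the simplex.
E_true = rng.random((bands, n_end))
A_true = rng.dirichlet(np.ones(n_end), size=n_pix).T   # (n_end, n_pix)
X = E_true @ A_true                                    # (bands, n_pix)

E = rng.random((bands, n_end))                         # decoder init
mse0 = np.mean((E @ np.maximum(np.linalg.pinv(E) @ X, 0.0) - X) ** 2)

for _ in range(50):
    W = np.linalg.pinv(E)                  # tied encoder weights
    A = np.maximum(W @ X, 0.0)             # encode: nonnegative abundances
    # Decoder update: least-squares fit of X ~ E A, clipped to keep E >= 0.
    E = np.maximum(np.linalg.lstsq(A.T, X.T, rcond=None)[0].T, 0.0)

A = np.maximum(np.linalg.pinv(E) @ X, 0.0)
mse = np.mean((E @ A - X) ** 2)            # reconstruction error after fit
```

In this toy setting the reconstruction error drops well below its initial value while both the abundance and endmember estimates stay nonnegative; the cited methods instead train a deep encoder with sparsity penalties and additional robustness terms.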