2018
DOI: 10.1109/lgrs.2018.2823425
Sparsity-Constrained Deep Nonnegative Matrix Factorization for Hyperspectral Unmixing

Cited by 25 publications (11 citation statements)
References 20 publications
“…where P refers to the proximal mapping. Secondly, we apply the forward-backward splitting framework to the DGsnMF problem, and (4) is rewritten as the minimization problem (9), where…”
Section: Forward-Backward Splitting for DGsnMF
confidence: 99%
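The forward-backward (proximal-gradient) iteration referenced in the quote can be sketched on a generic composite objective. The operator, data, and L1-plus-nonnegativity regularizer below are illustrative stand-ins, not the DGsnMF problem itself:

```python
import numpy as np

# Minimal sketch of forward-backward splitting for min_x f(x) + g(x),
# with f smooth (gradient step) and g handled by its proximal mapping P.
# Illustrative choice: f(x) = 0.5*||A x - b||^2 and g(x) = lam*||x||_1
# restricted to x >= 0, whose proximal mapping is soft-thresholding
# clipped at zero. A, b, and lam are synthetic, not from the cited paper.

def prox_l1_nonneg(x, t):
    """Proximal mapping of t*||.||_1 restricted to x >= 0."""
    return np.maximum(x - t, 0.0)

def forward_backward(A, b, lam=0.1, n_iter=200):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad f
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step
        x = prox_l1_nonneg(x - step * grad, step * lam)   # backward (proximal) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, 0.5]
b = A @ x_true
x_hat = forward_backward(A, b)   # recovers the sparse nonnegative x_true
```

The split matters because the sparsity term is nonsmooth: the gradient step handles only the data-fit term, and the proximal mapping absorbs the regularizer in closed form.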
“…Feng et al. [10] present a sparsity-constrained deep nonnegative matrix factorization that enforces an L1/2 constraint [18] and total variation for hyperspectral unmixing. It is worth noting that deep matrix factorization methods achieve outstanding performance in many fields, including multi-view learning [38], remote sensing image processing [9], and community detection [34]. However, with its far larger number of variables, DMF is considerably more complex than traditional NMF.…”
Section: Introduction
confidence: 99%
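The L1/2-constrained factorization mentioned in the quote can be sketched for a single layer using the multiplicative update of L1/2-NMF, where the penalty on the abundance factor S appears as an S^{-1/2} term in the denominator. The dimensions, penalty weight, and data below are synthetic assumptions:

```python
import numpy as np

def l12_nmf(V, r, lam=0.1, n_iter=300, eps=1e-9):
    """Sketch of single-layer NMF with an L_{1/2} sparsity penalty on S:
    min ||V - W S||_F^2 + lam * sum(sqrt(S)),  W, S >= 0,
    via multiplicative updates (illustrative, not the cited deep variant)."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    S = rng.random((r, n)) + eps
    for _ in range(n_iter):
        W *= (V @ S.T) / (W @ S @ S.T + eps)
        # L_{1/2} penalty contributes (lam/2) * S^{-1/2} to the denominator,
        # shrinking small abundances toward zero (sparser than plain L1).
        S *= (W.T @ V) / (W.T @ W @ S + 0.5 * lam * S ** -0.5 + eps)
        S = np.maximum(S, eps)   # keep S strictly positive for the S**-0.5 term
    return W, S

# Synthetic exactly rank-5 nonnegative data
rng = np.random.default_rng(1)
V = rng.random((20, 5)) @ rng.random((5, 40))
W, S = l12_nmf(V, 5)
```

Stacking such layers (factorizing each S again) yields the deep variant; the sparsity term is what pushes each layer's abundances toward physically plausible, mostly-zero mixtures.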
“…The basic flowchart of the LBP-NAPCA pre-processing is depicted in Figure 2. Plugging the pre-obtained candidate spatial information into the spatial-regularization sparse unmixing model as an improved weight matrix, the proposed algorithm can be constructed as (16). To solve the optimization problem (16) of the proposed method, the alternating direction method of multipliers (ADMM) is used following [28,68], and the whole optimization procedure is revisited in detail.…”
Section: LBP-NAPCA-Based Sparse Unmixing
confidence: 99%
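The weighted sparse-regression step that ADMM solves in such methods can be sketched on a generic problem. The weight vector `w` below stands in for the spatially derived weight matrix of the quoted method; all names, sizes, and parameter values are illustrative assumptions:

```python
import numpy as np

def admm_weighted_l1(A, b, w, lam=0.05, rho=1.0, n_iter=300):
    """Sketch of ADMM for weighted sparse unmixing:
    min_x 0.5*||A x - b||^2 + lam*||w * x||_1  s.t. x >= 0.
    Splitting x = z: x-update is a linear solve, z-update a
    weighted soft-threshold with nonnegativity, u the scaled dual."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # quadratic subproblem
        z = np.maximum(x + u - lam * w / rho, 0.0)      # prox: weighted shrink + x >= 0
        u = u + x - z                                   # dual (running residual) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, 0.5]
b = A @ x_true
w = np.ones(10)          # per-coefficient weights; spatially informed in the cited work
x_hat = admm_weighted_l1(A, b, w)
```

Larger weights on a coefficient raise its shrinkage threshold, so spatially unlikely endmembers are suppressed more aggressively, which is the role the pre-computed weight matrix plays in the quoted model.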
“…Traditionally, hyperspectral unmixing is divided into two stages: endmember selection and fractional abundance estimation [12]. In addition, blind source separation (BSS)-based unsupervised unmixing methods [13][14][15][16], such as independent component analysis [13] or non-negative matrix/tensor factorization-based hyperspectral unmixing [14][15][16], have also been proven able to unmix highly mixed datasets with comparable unmixing accuracy. However, because the two stages of traditional spectral unmixing are separated, endmember estimation errors and fractional abundance estimation errors may accumulate [17], leading to poor unmixing results. Unsupervised unmixing techniques may also fail, since they can extract virtual endmembers without any physical meaning, or work only under the assumption that pure pixels exist, which is hard to guarantee in reality [18][19][20][21].…”
Section: Introduction
confidence: 99%
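The second stage of the two-stage pipeline described above, abundance estimation given fixed endmembers, can be sketched as a per-pixel nonnegative least-squares problem. The row-augmentation trick with a large weight `delta` approximately enforces sum-to-one; `delta` and all data here are illustrative assumptions, not values from the cited papers:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_abundances(E, Y, delta=100.0):
    """Stage-two sketch: given an endmember matrix E (bands x endmembers)
    from stage one, solve NNLS per pixel. Appending a heavily weighted
    row of ones approximately enforces the sum-to-one constraint (a
    common fully-constrained least-squares device)."""
    bands, p = E.shape
    E_aug = np.vstack([E, delta * np.ones((1, p))])
    A = np.zeros((p, Y.shape[1]))
    for j in range(Y.shape[1]):
        y_aug = np.append(Y[:, j], delta)   # target 1 for the sum-to-one row
        A[:, j], _ = nnls(E_aug, y_aug)
    return A

rng = np.random.default_rng(0)
E = rng.random((6, 3))             # 6 bands, 3 synthetic endmembers
A_true = rng.random((3, 5))
A_true /= A_true.sum(axis=0)       # true abundances on the simplex
Y = E @ A_true                     # noiseless mixed pixels
A_est = estimate_abundances(E, Y)
```

The error-accumulation point in the quote is visible here: any error in `E` from stage one propagates directly into every per-pixel solve, which is what motivates the joint (NMF-based) formulations.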
“…Since traditional single-layer NMF may produce incorrect unmixing results, Feng et al. [43] extended single-layer NMF to deep NMF, which promotes smooth segmentation of abundances. In addition, sparsity constraints can be further enforced on each layer of the deep NMF [44]. The idea of subspace clustering has also been incorporated into the NMF framework for unmixing [45], [46].…”
Section: Introduction
confidence: 99%
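The single-layer-to-deep extension mentioned in the quote amounts to factorizing the abundance matrix of each layer again, V ≈ W1 W2 … WL S. A minimal sketch with greedy layer-wise pretraining and plain multiplicative updates follows; the fine-tuning pass and per-layer sparsity constraints of the cited works are omitted, and all sizes are illustrative:

```python
import numpy as np

def deep_nmf(V, ranks, n_iter=200, eps=1e-9):
    """Sketch of deep (multi-layer) NMF: V ≈ W1 W2 ... WL S, trained
    greedily by factorizing the previous layer's coefficient matrix."""
    rng = np.random.default_rng(0)

    def nmf(X, r):
        # Standard multiplicative-update NMF for one layer.
        W = rng.random((X.shape[0], r)) + eps
        H = rng.random((r, X.shape[1])) + eps
        for _ in range(n_iter):
            W *= (X @ H.T) / (W @ H @ H.T + eps)
            H *= (W.T @ X) / (W.T @ W @ H + eps)
        return W, H

    Ws, H = [], V
    for r in ranks:          # each layer factors the previous coefficients
        W, H = nmf(H, r)
        Ws.append(W)
    return Ws, H

V = np.random.default_rng(1).random((20, 30))
Ws, H = deep_nmf(V, [8, 4])          # two layers with decreasing ranks
V_hat = Ws[0] @ Ws[1] @ H            # multi-layer reconstruction
```

Decreasing ranks across layers give a hierarchy of increasingly abstract nonnegative parts, which is the property the cited unmixing papers exploit; in practice a joint fine-tuning stage over all layers follows the greedy pass.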