2019
DOI: 10.1109/tgrs.2018.2873326
Joint-Sparse-Blocks and Low-Rank Representation for Hyperspectral Unmixing

Cited by 87 publications (79 citation statements)
References 63 publications
“…Similarly, we multiply both sides of equation (18) by S and substitute equation (19) into equation (18); the abundance matrix S can then be updated as:…”
Section: B. Optimization
Confidence: 99%
“…Some of these focus on endmember extraction from statistical and geometrical aspects, such as the Pixel Purity Index [13], N-FINDR [14], alternating projected subgradients [15], Vertex Component Analysis [16], independent component analysis [17], and minimum-volume-based unmixing algorithms [18]. Other methods address the problem of abundance estimation under the assumption that the endmembers are available [19]. With the almost universal success of deep learning, there are also deep-neural-network-based hyperspectral unmixing methods [20]–[22].…”
Section: Introduction
Confidence: 99%
“…However, they require appropriate computational resources or parameter settings to maintain their performance, which is not always guaranteed or efficient. LMM-based algorithms have a clearer conceptual meaning, making endmember extraction and abundance estimation easier to capture, owing to multiple priors on the data matrices, such as sparsity [7], [8], low-rank [9], and geometric [10] properties, which have attracted considerable attention [2], [3].…”
Section: Introduction
Confidence: 99%
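The linear mixing model (LMM) referenced above assumes each observed pixel spectrum is a nonnegative combination of endmember signatures. A minimal sketch of abundance estimation under this assumption, using synthetic data and per-pixel nonnegative least squares (all names and dimensions here are hypothetical, not taken from the cited paper):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical toy setup: L spectral bands, p endmembers, N pixels.
rng = np.random.default_rng(0)
L, p, N = 50, 3, 4
E = rng.random((L, p))            # endmember signatures (one per column)
S_true = rng.random((p, N))
S_true /= S_true.sum(axis=0)      # abundances sum to one per pixel
Y = E @ S_true                    # observed pixels under the noiseless LMM

# Abundance estimation: solve min ||E s - y||_2 s.t. s >= 0 per pixel.
S_hat = np.column_stack([nnls(E, Y[:, i])[0] for i in range(N)])

print(np.allclose(E @ S_hat, Y, atol=1e-6))  # → True on noiseless data
```

Sparse and low-rank unmixing methods, such as the one this page indexes, add structured priors on the abundance matrix on top of this basic least-squares fit.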
“…Therefore, we propose the SENMAV algorithm, which identifies the endmembers by simultaneously considering their spectral features and spatial context. The endmembers in our SENMAV framework (10) obtain their spectral information under the data simplex via the maximum-simplex-volume framework (8) and (9), and their spatial information is acquired by means of the spatial energy prior (7). It is worth mentioning that the two terms (i.e., the spatial energy prior and the maximum simplex volume) have different data scales and make different contributions to endmember selection.…”
Confidence: 99%