Identifiability-Guaranteed Simplex-Structured Post-Nonlinear Mixture Learning via Autoencoder
2021
DOI: 10.1109/tsp.2021.3096806

Cited by 8 publications (9 citation statements); references 33 publications.
Citation types: 0 supporting, 9 mentioning, 0 contrasting.
“…Therefore, computing α^(t) and β^(t) takes O(R^3) flops, but R is normally small. In the AP solver, (19b) costs O(IJR log R) flops via the water-filling type algorithm, the SVD in (20) takes O(IJLR), and the projection in (21) takes O(max{I,J}^3 R + min{I,J} log min{I,J}).…”
Section: E. Computational Complexity (citation type: mentioning)
confidence: 99%
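For context on the quoted complexity figures: the "water-filling type algorithm" behind the O(IJR log R) term is consistent with the standard sort-based projection onto the probability simplex, which costs O(R log R) per R-dimensional vector (the sort dominates) and hence O(IJR log R) across all I×J pixels. A minimal sketch of that classic routine follows; this is an assumption about the flavor of algorithm meant, not the cited paper's exact solver.

```python
import numpy as np

def project_to_simplex(v):
    """Project v in R^R onto the probability simplex {w : w >= 0, sum(w) = 1}.

    Standard sort-based ("water-filling" style) routine: the sort dominates,
    so the cost is O(R log R) per vector; applied pixel by pixel to an
    I x J image this yields the O(IJR log R) total quoted above.
    """
    R = v.size
    u = np.sort(v)[::-1]            # sort entries in descending order
    css = np.cumsum(u)              # running sums of the sorted entries
    k = np.arange(1, R + 1)
    # Largest index whose entry stays above the "water level".
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)  # water level (uniform shift)
    return np.maximum(v + theta, 0.0)
```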
“…Interested readers are referred to surveys in [18], [19]. More recent developments using neural networks can be found in the literature as well; see, e.g., [20], [21]. Nonetheless, we focus on the more classic and more widely used LMM in this work.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…In fact, numerical evidence shows that the LMM can oftentimes explain the vast majority of pixels with high accuracy; see, e.g., [15]. It is worth noting that many nonlinear mixture models are also proposed for HU; see [16] and recent developments using neural networks, e.g., [17]. Nonlinear models are used to capture the complex dynamics and data generating mechanisms that could not be interpreted by the LMM, which reduces modeling errors and enhances the HU performance.…”
Citation type: mentioning
confidence: 99%
“…Nonlinear models are used to capture the complex dynamics and data generating mechanisms that could not be interpreted by the LMM, which reduces modeling errors and enhances the HU performance. However, using nonlinear models is not without price: the computational problems under nonlinear models are in general much harder to tackle (especially when deep neural networks are involved); see, e.g., [17]. In fact, the LMM often allows us to design algorithms that strike a good balance between modeling accuracy and computational convenience.…”
Citation type: mentioning
confidence: 99%
“…A supervised autoencoder was used in [24] for the Fan, bilinear, and PPNM models, in which radial basis function (RBF) kernels and K-means clustering were used for the estimation of the number of endmembers and the endmember spectra, respectively. Most DL-based nonlinear unmixing techniques are autoencoder-based architectures built on the PPNM and therefore suffer from the drawbacks of the bilinear models mentioned above [25], [26], [27]. A deep autoencoder was proposed in [28] where the encoder utilizes an extra nonlinear layer to model the nonlinearity in the data.…”
Citation type: mentioning
confidence: 99%
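For context on the PPNM referenced throughout this statement: it is the polynomial post-nonlinear mixing model, in which the linear mixture is passed through an elementwise quadratic distortion. A minimal sketch under that common formulation follows; the names M, a, and b are illustrative, not taken from the quoted text.

```python
import numpy as np

def ppnm_forward(M, a, b):
    """Polynomial post-nonlinear mixing (PPNM) forward model (sketch).

    M : (L, R) endmember signature matrix (L bands, R endmembers)
    a : (R,) abundance vector, nonnegative and summing to one
    b : scalar coefficient controlling the quadratic nonlinearity

    Returns the noiseless pixel y = Ma + b * (Ma)**2, i.e. the linear
    mixture distorted elementwise by a quadratic term; b = 0 recovers
    the plain LMM.
    """
    x = M @ a               # linear mixing (LMM part)
    return x + b * x**2     # post-nonlinear quadratic distortion

# Example: two endmembers over three bands with mild nonlinearity.
M = np.array([[0.9, 0.1],
              [0.5, 0.4],
              [0.2, 0.8]])
a = np.array([0.7, 0.3])    # abundances on the probability simplex
y = ppnm_forward(M, a, b=0.1)
```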