2023
DOI: 10.1109/tnnls.2021.3114203
Adversarial Autoencoder Network for Hyperspectral Unmixing

Cited by 49 publications (17 citation statements)
References 41 publications
“…They constrain the encoder so that it can invert the mixing process by using the pseudoinverse of the endmember matrix, with both the encoder and the decoder composed of sequential fully-connected layers. Another family of autoencoder techniques for unmixing relies on adversarial training, as in [26]. These methods assume that pixels in the same region share the same statistical properties and can therefore be modeled with a prior distribution.…”
Section: Nonlinear Models Using Autoencoders (mentioning)
confidence: 99%
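A minimal PyTorch sketch of the adversarial scheme described in this excerpt is given below: the encoder maps a pixel to abundances, the decoder acts as a linear mixing model, and a discriminator pushes the encoded abundances toward a prior distribution. The band and endmember counts, the Dirichlet prior, the discriminator architecture, and the 0.01 adversarial weight are illustrative assumptions, not the exact design of [26].

```python
# Hedged sketch: adversarial autoencoder for linear spectral unmixing.
# All sizes, the prior, and the loss weighting are assumptions for illustration.
import torch
import torch.nn as nn

n_bands, n_endmembers = 200, 4           # assumed sensor bands / material count

encoder = nn.Sequential(                  # fully-connected encoder: pixel -> abundances
    nn.Linear(n_bands, 64), nn.ReLU(),
    nn.Linear(64, n_endmembers), nn.Softmax(dim=-1),  # abundances sum to one
)
decoder = nn.Linear(n_endmembers, n_bands, bias=False)  # columns ~ endmember signatures
discriminator = nn.Sequential(            # distinguishes prior samples from encodings
    nn.Linear(n_endmembers, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
# One could warm-start the decoder with extracted endmembers E (e.g. from VCA)
# so the encoder effectively learns an approximate pseudoinverse of E.

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()
opt_ae = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
prior = torch.distributions.Dirichlet(torch.ones(n_endmembers))  # assumed abundance prior

pixels = torch.rand(256, n_bands)         # stand-in for a batch of hyperspectral pixels
for step in range(100):
    # 1) reconstruction: the autoencoder reproduces the mixed pixel
    abund = encoder(pixels)
    recon_loss = mse(decoder(abund), pixels)
    # 2) adversarial regularization: encoded abundances should look like prior draws
    adv_loss = bce(discriminator(abund), torch.ones(len(pixels), 1))
    opt_ae.zero_grad()
    (recon_loss + 0.01 * adv_loss).backward()
    opt_ae.step()
    # 3) discriminator update: real prior samples vs. detached encoder outputs
    real = prior.sample((len(pixels),))
    fake = encoder(pixels).detach()
    d_loss = bce(discriminator(real), torch.ones(len(pixels), 1)) + \
             bce(discriminator(fake), torch.zeros(len(pixels), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
```

The discriminator replaces an explicit KL term: instead of prescribing the posterior analytically, the encoder is trained until its abundance outputs are statistically indistinguishable from draws of the assumed prior.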
“…There are several methods based on manifold learning: isometric feature mapping (ISOMAP) (Moradzadeh et al, 2020), multidimensional dimension transformation (MDT) (Leprince et al, 2021), local linear embedding (LLE), Laplacian eigenmaps (LE), etc. The neural network methods include autoencoder networks (AN) (Jin et al, 2021) and self-organizing feature mapping (SOM) (Ghahramani et al, 2021). The different principles and structures of these dimensionality reduction algorithms lead to different recognition performance.…”
Section: Emotion Feature Dimension Reduction (mentioning)
confidence: 99%
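For concreteness, a minimal sketch of autoencoder-based dimension reduction (the AN method named above) follows; the feature dimension, latent size, and layer widths are illustrative assumptions.

```python
# Minimal sketch of autoencoder dimensionality reduction; sizes are assumed.
import torch
import torch.nn as nn

in_dim, latent_dim = 128, 8               # assumed feature / reduced dimensions
encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

features = torch.randn(512, in_dim)       # stand-in for extracted emotion features
for _ in range(200):
    recon = decoder(encoder(features))    # compress, then reconstruct
    loss = nn.functional.mse_loss(recon, features)
    opt.zero_grad()
    loss.backward()
    opt.step()

reduced = encoder(features)               # low-dimensional representation for recognition
```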
“…The first stage network estimates the endmembers and abundance maps of the input image, while the second stage reconstructs the input image. In [43], autoencoders used for hyperspectral unmixing are grouped into five categories: (a) sparse nonnegative autoencoders (a stack of nonnegative sparse autoencoders (SNSA)) [44]; (b) variational autoencoders (Deep AutoEncoder Network (DAEN) [45], Deep Generative Unmixing algorithm (DeepGUn) [46]); (c) adversarial autoencoders (adversarial autoencoder network (AAENet)) [47]-[49]; (d) denoising autoencoders (an untied Denoising Autoencoder with Sparsity (uDAS)) [50]; and (e) convolutional autoencoders [51]-[53]. In [54], a two-stream Siamese deep network was proposed to enhance the performance of spectral unmixing.…”
Section: Introduction (mentioning)
confidence: 99%
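All five categories above specialize the same linear-mixing autoencoder core: the encoder maps each pixel to abundances, and the decoder's weight matrix plays the role of the endmember matrix. A hedged sketch of that shared core follows; the sizes are assumed, and a simple nonnegativity clamp stands in for the sparsity, variational, adversarial, or denoising regularizers that distinguish the categories.

```python
# Hedged sketch of the shared linear-mixing autoencoder core; sizes are assumed.
import torch
import torch.nn as nn

n_bands, n_endmembers = 200, 5            # assumed sensor bands / material count
encoder = nn.Sequential(                   # abundances: nonnegative and sum to one
    nn.Linear(n_bands, n_endmembers), nn.Softmax(dim=-1))
decoder = nn.Linear(n_endmembers, n_bands, bias=False)  # x_hat = E a (linear mixing)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

pixels = torch.rand(1024, n_bands)         # stand-in hyperspectral pixels
for _ in range(300):
    loss = nn.functional.mse_loss(decoder(encoder(pixels)), pixels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        decoder.weight.clamp_(min=0)       # endmember signatures stay nonnegative

endmembers = decoder.weight.detach()       # (n_bands, n_endmembers): columns are spectra
abundances = encoder(pixels)               # per-pixel mixing fractions
```

Reading the decoder weights out as the endmember estimates, rather than training a separate predictor, is what makes the unmixing interpretable across all five variants.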