2016
DOI: 10.1117/12.2245011
Semi-supervised classification of hyperspectral imagery based on stacked autoencoders

Cited by 10 publications (8 citation statements)
References 7 publications
“…Within this perspective, auto-encoders allow learning a smart compression with minimal information loss, for example more efficiently than a standard PCA. Thus [46] and later [47] proposed dimension reduction through a cascade of auto-encoders for denoising, followed by classification with a simple perceptron.…”
Section: B. Spectral Classification
confidence: 99%
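The excerpt above describes greedy layer-wise training of a cascade (stack) of auto-encoders: each layer learns a compressed code of its input, and the next layer is trained on those codes. A minimal NumPy sketch of this idea with linear encoder/decoder pairs trained by gradient descent; all data shapes, layer sizes, and learning rates here are hypothetical illustrations, not taken from [46] or [47]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hyperspectral pixels: 200 samples x 50 bands,
# driven by 3 latent factors plus noise (hypothetical data).
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 50)) + 0.05 * rng.normal(size=(200, 50))

def train_layer(X, hidden, epochs=500, lr=0.01, seed=0):
    """Train one linear auto-encoder layer (encoder W1, decoder W2)
    by plain gradient descent on mean squared reconstruction error."""
    r = np.random.default_rng(seed)
    n, d = X.shape
    W1 = r.normal(scale=0.1, size=(d, hidden))   # encoder weights
    W2 = r.normal(scale=0.1, size=(hidden, d))   # decoder weights
    for _ in range(epochs):
        H = X @ W1                    # encode
        E = H @ W2 - X                # reconstruction error
        gW2 = (H.T @ E) / n           # gradient w.r.t. decoder
        gW1 = (X.T @ (E @ W2.T)) / n  # gradient w.r.t. encoder
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

# Greedy stacking: layer 2 is trained on the codes produced by layer 1.
W1a, W2a = train_layer(X, hidden=10)
codes1 = X @ W1a                      # 50 bands -> 10 features
W1b, W2b = train_layer(codes1, hidden=3, seed=1)
codes2 = codes1 @ W1b                 # 10 features -> 3 features
```

In the papers cited, the stacked encoder's output would then feed a simple perceptron classifier; that supervised stage (and any denoising corruption of the inputs) is omitted here for brevity.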
“…In this section, seven different methods are used to verify the effectiveness of the proposed method, including EMAPs [8], 3D CNN [37], semi-supervised stacked autoencoders (semiSAE) [32], semi-supervised multinomial logistic regression (semiMLR) [26], maximizer of the posterior marginal by loopy belief propagation with AL (MPM-LBP-AL) [30], Mahalanobis metric for clustering (MMC) [44] and perceptual loss (PL) [47] methods.…”
Section: Comparisons of Results
confidence: 99%
“…Sufficient training samples play an important role in supervised learning; however, the acquisition of training samples for remote sensing images is difficult and time-consuming. Hence, researchers have focused on semi-supervised learning, such as semi-supervised random forests [23], transductive SVMs (TSVM) [24], [25], semi-supervised multinomial logistic regression [26], ladder networks [27], self-training [28], co-training [29], active learning [30], [31], semi-supervised stacked autoencoders [32], generative adversarial networks (GAN) [33], and so on. Semi-supervised learning methods can improve classification performance by combining labeled samples with highly reliable unlabeled samples.…”
Section: Introduction
confidence: 99%
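Several of these excerpts describe the same recipe: augment a few labeled samples with high-confidence pseudo-labels on unlabeled samples (self-training, [28] above). A toy NumPy sketch with a nearest-centroid classifier; the data, confidence measure, and selection threshold are hypothetical illustrations, not drawn from any of the cited methods:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-class "spectra": well-separated Gaussian clusters,
# 60 samples per class with 5 bands each.
n = 60
X = np.vstack([rng.normal(0.0, 1.0, (n, 5)), rng.normal(3.0, 1.0, (n, 5))])
y = np.repeat([0, 1], n)

labeled = np.array([0, 1, n, n + 1])             # only 4 labeled pixels
pool = np.setdiff1d(np.arange(2 * n), labeled)   # the "unlabeled" pool

def nearest_centroid(Xl, yl, Xq):
    """Predict by nearest class centroid; confidence = distance gap."""
    cents = np.stack([Xl[yl == k].mean(axis=0) for k in (0, 1)])
    d = np.linalg.norm(Xq[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])

Xl, yl = X[labeled], y[labeled]
for _ in range(5):                        # self-training rounds
    if pool.size == 0:
        break
    pred, conf = nearest_centroid(Xl, yl, X[pool])
    keep = conf >= np.median(conf)        # adopt the more confident half
    Xl = np.vstack([Xl, X[pool[keep]]])   # add pseudo-labeled samples
    yl = np.concatenate([yl, pred[keep]])
    pool = pool[~keep]                    # shrink the unlabeled pool

pred_all, _ = nearest_centroid(Xl, yl, X)
```

The cited methods replace the nearest-centroid model with SVMs, logistic regression, or stacked autoencoders, and use more principled confidence estimates, but the labeled-plus-reliable-unlabeled loop is the same.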
“…The semi-supervised classification methods include TSVM [25], semi-supervised MLR (SemiMLR) [26], a semi-supervised SAE (SemiSAE) [27], and a ladder network [28].…”
Section: Experiments Setup
confidence: 99%
“…Semi-supervised learning is an effective tool that works by combining the limited labeled samples with highly reliable unlabeled samples. Semi-supervised learning has been widely applied in remote sensing image classification; for instance, see the semi-supervised random forest [23], semi-supervised deep fuzzy C-means clustering [24], transductive SVM [25], semi-supervised multinomial logistic regression (MLR) [26], semi-supervised SAEs [27], and ladder networks [28]. In [29], Zheng et al proposed a geometric low-rank Laplacian regularized semi-supervised classifier to exploit the spatial and spectral structure of HSI data.…”
Section: Introduction
confidence: 99%