2017
DOI: 10.1109/tgrs.2017.2651639
Self-Taught Feature Learning for Hyperspectral Image Classification

Cited by 110 publications (92 citation statements)
References 49 publications
“…In recent years, transfer learning has been successfully applied in many fields, especially in the remote sensing field [114], [115] where the available training samples are not sufficient. In [77], [104], [105], [107], [108], transfer learning was employed to set the initial values of network parameters copied from other trained deep networks, which provides better classification performance compared with the random initialization-based methods. Fig.…”
Section: B. Transfer Learning
confidence: 99%
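The initialization strategy described in the statement above can be sketched briefly. Below is a minimal, hypothetical PyTorch example (not the cited papers' actual code): parameters of a network pretrained on a source scene are copied into a target network wherever names and shapes match, while the task-specific output layer keeps its random initialization. The layer sizes and class counts are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical fully connected classifier for per-pixel HSI classification.
def make_net(n_bands=200, n_classes=16):
    return nn.Sequential(
        nn.Linear(n_bands, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, n_classes),
    )

source_net = make_net(n_classes=9)    # assumed to be pretrained on a source scene
target_net = make_net(n_classes=16)   # to be trained on the target scene

# Copy every parameter whose name and shape match; the task-specific output
# layer keeps its random initialization because its shape differs.
src_state = source_net.state_dict()
tgt_state = target_net.state_dict()
copied = {k: v for k, v in src_state.items()
          if k in tgt_state and v.shape == tgt_state[k].shape}
tgt_state.update(copied)
target_net.load_state_dict(tgt_state)
print(f"transferred {len(copied)} of {len(tgt_state)} tensors")
```

Fine-tuning then proceeds on the target scene's labeled samples, typically with a smaller learning rate for the copied layers than for the freshly initialized output layer.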
“…For example, research works [35], [37], [53], [54], [73], [104], [111], [116] utilized a fully connected network to classify HSIs, where the training of the network was divided into pre-training in an unsupervised manner and fine-tuning with labeled data. In the pre-training stage, samples from the unlabeled data set were first mapped into intermediate features.…”
Section: Unsupervised/Semi-Supervised Feature Learning
confidence: 99%
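A minimal sketch of that two-stage scheme, assuming a single-hidden-layer autoencoder, placeholder data, and illustrative dimensions (this reconstructs the general idea, not the exact architectures of the cited works): the encoder is first fit to reconstruct unlabeled spectra, then reused as the feature extractor of a fully connected classifier and fine-tuned with the labeled samples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_bands, n_hidden, n_classes = 200, 64, 16    # assumed dimensions

encoder = nn.Sequential(nn.Linear(n_bands, n_hidden), nn.ReLU())
decoder = nn.Linear(n_hidden, n_bands)

# Stage 1: unsupervised pre-training on unlabeled pixels (reconstruction loss).
unlabeled = torch.randn(1024, n_bands)        # placeholder for unlabeled spectra
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    recon = decoder(encoder(unlabeled))
    F.mse_loss(recon, unlabeled).backward()
    opt.step()

# Stage 2: supervised fine-tuning of encoder + classifier head on labeled pixels.
head = nn.Linear(n_hidden, n_classes)
labeled_x = torch.randn(128, n_bands)                 # placeholder labeled samples
labeled_y = torch.randint(0, n_classes, (128,))       # placeholder labels
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(50):
    opt.zero_grad()
    F.cross_entropy(head(encoder(labeled_x)), labeled_y).backward()
    opt.step()
```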
“…There has been interest in using CNNs for analyzing remote sensing imagery [3], [4], [5], [6], [7]. Uzkent et al. adapted correlation filters trained on RGB images with HSI bands to successfully track cars in moving-platform scenarios [3].…”
Section: Introduction
confidence: 99%
“…Kleynhans et al. compared the performance of predicting top-of-atmosphere thermal radiance by using forward modeling with radiosonde data and radiative transfer modeling (MODTRAN [8]) against a multi-layer perceptron (MLP) and a CNN, and observed better performance from the MLP and CNN in all experimental cases [5]. Kemker et al. used multi-scale independent component analysis and stacked convolutional autoencoders as self-supervised learning tasks before performing semantic segmentation on multispectral and hyperspectral imagery [6], [7].…”
Section: Introduction
confidence: 99%
“…Unsupervised models (e.g., auto-encoders) are trained to extract features from large unlabeled data sets. By restricting the encoder-decoder structure, one can adjust the model to achieve the results of the required task [6], [7], [8]. Supervised models (e.g., convolutional neural networks (CNNs) and deep belief networks) are trained using ground truth (GT) information as the expected output of the network.…”
Section: Introduction
confidence: 99%
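To contrast the two routes described in the statement above, here is a minimal supervised sketch: a small 1-D CNN over the spectral axis of each pixel, trained directly against ground-truth labels with cross-entropy. The band count, class count, and filter sizes are assumptions for illustration, not the cited papers' architectures.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_bands, n_classes = 200, 16                       # assumed dimensions

# Small 1-D CNN over the spectral dimension of each pixel.
cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, n_classes),
)

x = torch.randn(64, 1, n_bands)                    # placeholder labeled spectra
y = torch.randint(0, n_classes, (64,))             # placeholder GT labels
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
for _ in range(20):
    opt.zero_grad()
    F.cross_entropy(cnn(x), y).backward()          # supervised training against GT
    opt.step()
```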