2019
DOI: 10.3390/rs11070864

Unsupervised Feature-Learning for Hyperspectral Data with Autoencoders

Abstract: This paper proposes novel autoencoders for unsupervised feature-learning from hyperspectral data. Hyperspectral data typically have many dimensions and a significant amount of variability such that many data points are required to represent the distribution of the data. This poses challenges for higher-level algorithms which use the hyperspectral data (e.g., those that map the environment). Feature-learning mitigates this by projecting the data into a lower-dimensional space where the important information is …
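The abstract describes learning a compact per-pixel representation with an autoencoder; the paper's specific architectures are not reproduced here. As a rough illustrative sketch of the general idea only (the layer sizes, 220-band input, and training loop are assumptions, not the authors' configuration):

```python
# Minimal per-pixel spectral autoencoder (illustrative sketch; layer sizes,
# the 220-band input, and the training loop are assumptions, not the paper's setup).
import torch
import torch.nn as nn

class SpectralAutoencoder(nn.Module):
    def __init__(self, n_bands=220, n_features=16):
        super().__init__()
        # Encoder: project each spectrum down to a low-dimensional feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )
        # Decoder: reconstruct the spectrum from the learned features.
        self.decoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_bands),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SpectralAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Unsupervised training: reconstruct each spectrum from its compressed code.
spectra = torch.rand(1024, 220)  # stand-in for real hyperspectral pixel spectra
for _ in range(10):
    recon, _ = model(spectra)
    loss = loss_fn(recon, spectra)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, the encoder output can be used as the lower-dimensional feature representation fed to downstream mapping or classification algorithms.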

Cited by 36 publications (17 citation statements)
References 39 publications
“…Following the same procedure of data selection, in total 24000 background samples and 6750 barley samples are generated in the training and testing dataset with s=200. In addition, three conventional and four deep learning models are used for benchmarking, including PCA [27], Folded PCA (FPCA) [32], 1-D Singular Spectrum Analysis (1DSSA) [48], Deep convolutional neural network (CNN) [49], Stacked auto-encoder (SAE) [50], Deep recurrent neural network (RNN) [51], and Auto-CNN [52]. These state-of-the-art approaches are selected for two main reasons, i.e.…”
Section: Extended Experiments on Background Analysis (mentioning)
confidence: 99%
“…Finally, the argument can be made that the RGB normalization layer might not be necessary when a scale invariant loss function is considered. Examples might be the spectral angular error, spectral information divergence [24] or spectral derivative based loss functions [25]. It should be noted that from a purely theoretical point of view there is no guarantee that training a network with a loss function that is invariant to changes in brightness will make the fully-trained network invariant to changes in brightness.…”
Section: Methods (mentioning)
confidence: 99%
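The scale-invariant losses mentioned in that statement are not spelled out here. As one concrete example only, a spectral-angle style loss, which is unchanged when either spectrum is multiplied by a positive scalar (a brightness change), might be sketched as follows (the function name and usage are illustrative assumptions):

```python
# Sketch of a spectral-angle loss: the angle between predicted and reference
# spectra is invariant to scaling either spectrum by a positive constant.
import torch

def spectral_angle_loss(pred, target, eps=1e-8):
    # pred, target: (batch, n_bands) spectra
    dot = (pred * target).sum(dim=1)
    norm = pred.norm(dim=1) * target.norm(dim=1)
    cos = torch.clamp(dot / (norm + eps), -1.0, 1.0)
    return torch.acos(cos).mean()  # mean angle in radians over the batch

# Scaling one spectrum by a positive constant leaves the loss (essentially) unchanged:
a = torch.rand(4, 200)
b = torch.rand(4, 200)
print(spectral_angle_loss(a, b), spectral_angle_loss(3.0 * a, b))
```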
“…The high number of spectral channels associated with hyperspectral imagery presents a challenge to the training and performance of machine learning algorithms due to what is commonly known as the "curse of dimensionality" (Windrim et al. 2019). The high number of spectral bands drastically increases the number of data points, which in turn can dramatically increase the computational requirements to train a given machine learning algorithm.…”
Section: Dimensionality Reduction (mentioning)
confidence: 99%
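For comparison with the learned features discussed above, the conventional dimensionality-reduction step this statement refers to is often done with PCA. A minimal sketch, assuming scikit-learn and an illustrative band/component count (not taken from the cited work):

```python
# Illustrative PCA dimensionality reduction of hyperspectral pixels
# (band and component counts are assumptions, not from the cited work).
import numpy as np
from sklearn.decomposition import PCA

pixels = np.random.rand(10000, 224)   # stand-in for (n_pixels, n_bands) data
pca = PCA(n_components=10)
features = pca.fit_transform(pixels)  # (n_pixels, 10) reduced representation
print(features.shape, pca.explained_variance_ratio_.sum())
```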