2009
DOI: 10.3390/s90604247
Extended Averaged Learning Subspace Method for Hyperspectral Data Classification

Abstract: Averaged learning subspace methods (ALSM) have the advantage of being easily implemented and appear to perform well in classification problems involving hyperspectral images. However, some open and challenging problems remain which, if addressed, could further improve their performance in terms of classification accuracy. We carried out experiments mainly using two kinds of improved subspace methods (namely, dynamic and fixed subspace methods), in conjunction with the [0,1] and [-1,+1] normalization methods. W…

Cited by 9 publications (4 citation statements). References 38 publications.
“…Furthermore, we are now trying to develop a more efficient approach to the eigenvector selection based on the subspace method (Bagan et al. ).…”
Section: Discussion
confidence: 99%
“…Also, applying the LASSO-based eigenvector selection approach to other models, such as the generalized linear model (GLM), remains an important topic for future research. Furthermore, we are now trying to develop a more efficient approach for eigenvector selection based on the subspace method (Bagan et al. 2009).…”
Section: Discussion
confidence: 99%
“…In order to make the feature values in the same order of magnitude and improve the convergence speed of the model, we normalize the extracted features ( Bagan et al, 2009 ). The five feature values are mapped to [−1, 1] and the normalized feature values are taken as input variables of the BPNN model in this paper.…”
Section: The Process of the Proposed Algorithm
confidence: 99%
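The normalization step described in the excerpt above — mapping extracted feature values to [−1, 1] before they are fed to a model — is a standard min-max rescaling. The sketch below is a minimal illustration only; the function name, the NumPy implementation, and the example values are assumptions, not taken from the cited papers:

```python
import numpy as np

def normalize_features(X, lo=-1.0, hi=1.0):
    """Min-max normalize each feature column of X to the range [lo, hi]."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    span = np.where(x_max > x_min, x_max - x_min, 1.0)
    return lo + (hi - lo) * (X - x_min) / span

# Example: two feature columns on different scales are mapped to [-1, 1].
X = np.array([[0.0, 10.0],
              [5.0, 20.0],
              [10.0, 30.0]])
print(normalize_features(X))
```

Rescaling all features to a common range keeps them in the same order of magnitude, which is the convergence benefit the excerpt attributes to this step; the [0,1] variant mentioned in the abstract is obtained by passing `lo=0.0, hi=1.0`.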