2017
DOI: 10.1109/tim.2017.2664480
Hyperspectral Image Classification via Multiple-Feature-Based Adaptive Sparse Representation

Cited by 159 publications (58 citation statements)
References 47 publications
“…In [24], edge-preserving filtering was considered as a postprocessing technique to optimize the probabilistic results of an SVM. In [25]- [27], the spatial information within a neighboring region was incorporated into a sparse representation model. These sparse representation methods are based on the observation that hyperspectral pixels can usually be represented by a linear combination of a few common pixels from the same class.…”
Section: Introduction
confidence: 99%
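The statement above describes sparse-representation classification: a test pixel is coded as a sparse linear combination of training pixels (dictionary atoms) and assigned to the class whose atoms reconstruct it with the smallest residual. A minimal sketch follows; the hand-rolled OMP solver, the synthetic two-class dictionary, and all function names are illustrative assumptions, not the cited papers' actual implementations.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: k-sparse code of y over the columns of D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Sparse-representation classification: smallest class-wise residual wins."""
    x = omp(D, y, k)
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        # keep only the coefficients belonging to class c
        xc = np.where(labels == c, x, 0.0)
        residuals.append(np.linalg.norm(y - D @ xc))
    return classes[int(np.argmin(residuals))]
```

In practice the dictionary columns are the (normalized) spectra of labeled training pixels, and the spatial variants cited in [25]-[27] code a whole neighborhood jointly instead of a single pixel.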
“…proposed via the integration of contextual information from the neighborhood pixels. The proposed framework shows better performance and promising results even when the quantity of initially labeled samples is small [62].…”
Section: E. Small Number of Labelled Samples
confidence: 99%
“…ISOMAP characterizes the data distribution by geodesic distances instead of Euclidean distances; it seeks a lower-dimensional embedding that maintains the geodesic distances between all points [19]. However, due to their nonlinear characteristic, such methods suffer from the out-of-sample problem and cannot process unknown samples [20][21][22]. To address this issue, many linear manifold learning methods were designed to obtain explicit feature mappings, which can map unknown samples into a low-dimensional space [23,24].…”
Section: Introduction
confidence: 99%
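The geodesic distances ISOMAP relies on are typically approximated by building a k-nearest-neighbor graph weighted by Euclidean distances and running a shortest-path algorithm over it. A minimal sketch, assuming a dense graph and SciPy's `shortest_path`; the function name and the choice of k are illustrative, not taken from the cited works.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def geodesic_distances(X, k=5):
    """Approximate pairwise geodesic distances on the data manifold:
    Euclidean k-NN graph + all-pairs shortest paths (ISOMAP's first stage)."""
    d = cdist(X, X)                       # pairwise Euclidean distances
    n = X.shape[0]
    graph = np.zeros((n, n))              # zeros mean "no edge" for dense csgraph input
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]  # k nearest neighbors, skipping the point itself
        graph[i, nbrs] = d[i, nbrs]
    graph = np.maximum(graph, graph.T)    # symmetrize the k-NN graph
    # Dijkstra over the graph: path lengths approximate geodesic distances
    return shortest_path(graph, method="D", directed=False)
```

For points sampled along a curved manifold (e.g., an arc), the graph distance between the endpoints approaches the arc length rather than the shorter straight-line (Euclidean) distance, which is exactly the distinction the quoted passage draws.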