2011
DOI: 10.1504/ijscc.2011.042430
Reducing dimensionality of hyperspectral data with diffusion maps and clustering with k-means and Fuzzy ART

Abstract: It is very difficult to analyze large amounts of hyperspectral data. Here we present a method based on reducing the dimensionality of the data and clustering the result as a step toward classification. Dimensionality reduction is done with diffusion maps, which interpret the eigenfunctions of Markov matrices as a system of coordinates on the original dataset in order to obtain an efficient geometric description of the data. Clustering is done using k-means and a neural network clustering…
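To make the pipeline in the abstract concrete, below is a minimal, illustrative sketch of diffusion-map embedding followed by k-means, not the authors' implementation. The Gaussian bandwidth heuristic, the number of components, and the random stand-in data are all assumptions; the Fuzzy ART clusterer mentioned in the title has no standard scikit-learn counterpart and is omitted here.

```python
# Minimal diffusion-maps + k-means sketch (illustrative, not the paper's code).
# Assumes X is an (n_pixels, n_bands) array of hyperspectral signatures.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def diffusion_map(X, n_components=3, epsilon=None, t=1):
    """Embed X using leading non-trivial eigenvectors of a Markov matrix."""
    D = pairwise_distances(X, metric="euclidean")
    if epsilon is None:
        epsilon = np.median(D) ** 2          # common bandwidth heuristic (assumed)
    K = np.exp(-D ** 2 / epsilon)            # Gaussian affinity kernel
    P = K / K.sum(axis=1, keepdims=True)     # row-normalize -> Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)           # sort eigenpairs by eigenvalue
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial eigenpair (eigenvalue 1, constant eigenvector) and
    # scale each coordinate by eigenvalue**t to get diffusion coordinates.
    return (vals[1:n_components + 1] ** t) * vecs[:, 1:n_components + 1]

X = np.random.rand(200, 100)                 # stand-in for hyperspectral pixels
Y = diffusion_map(X, n_components=3)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(Y)
```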


Cited by 6 publications (10 citation statements) | References 20 publications
“…To demonstrate the performance of the proposed CTNGPA, we consider the following experiments: the window size is 5 × 5, and Table III gives the number of samples for the different surface objects in the ROSIS University and AVIRIS Salinas datasets; 10% of the hyperspectral data are selected at random from the different surface objects as training samples, with the remaining 90% as testing samples; the comparative DR algorithms include four vector-based methods, i.e., DM [10], LDA [24], nonparametric weighted feature extraction (NWFE) [34], and Laplacian eigenmaps (LE) [35], and three tensor-based methods, i.e., MDA [23], MPCA [21], and AMD+PCA [25]; 1-NN and SVM are used to further classify the dimension-reduced hyperspectral data.…”
Section: Comparative Experiments (mentioning; confidence: 99%)
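The 10%/90% random split with 1-NN and SVM classification described in this statement is a standard protocol; a hedged sketch follows. The dataset shapes and class count are placeholders, not values from the cited paper.

```python
# Sketch of a stratified 10% train / 90% test split with 1-NN and SVM,
# as in the experimental setup quoted above (placeholder data and labels).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.rand(1000, 30)        # dimension-reduced hyperspectral features
y = np.random.randint(0, 9, 1000)   # surface-object labels (9 classes assumed)

# 10% per class for training, remaining 90% for testing
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.10, stratify=y, random_state=0)

for clf in (KNeighborsClassifier(n_neighbors=1), SVC(kernel="rbf")):
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, f"accuracy: {acc:.3f}")
```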
“…With hyperspectral data as an example, vector-based DR algorithms need to first transform the hyperspectral image into a 1-D vector, using only the spectral information of each pixel [8], [9], before the DR analysis is conducted. For instance, the methods in [10]-[12] rely on vector-based reduction using diffusion maps (DMs). A concise and informative representation of hyperspectral images can be achieved via the diffusion geometric coordinates derived from nonlinear dimension-reduction maps (DMs).…”
Section: Introduction (mentioning; confidence: 99%)
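The per-pixel vectorization this statement describes amounts to flattening the spatial axes of the data cube; a short sketch, with assumed cube dimensions loosely modeled on a ROSIS-style scene:

```python
# Vector-based preprocessing mentioned above: flatten an H x W x B
# hyperspectral cube into per-pixel spectral vectors before DR.
import numpy as np

cube = np.random.rand(64, 64, 103)   # stand-in scene with B = 103 bands
H, W, B = cube.shape
pixels = cube.reshape(H * W, B)      # one B-band spectrum per pixel
# `pixels` is what vector-based DR methods (DMs, LDA, ...) operate on;
# spatial structure is discarded, the limitation tensor methods address.
```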
“…Linear methods tend to capture an optimal subspace at a global level, and often fail to preserve local features. Manifold learning and kernel-method techniques, including multiple versions of Diffusion Maps, LLE, and ISOMAP (see for example [11]-[16]), have been used to improve the capability of feature extraction in HSI. These techniques try to capture local structure information, but often require a high number of operations (typically quadratic in the number of data samples), and suffer from other difficulties such as the “out-of-sample” issue [17].…”
Section: A. Class-Specific Reconstruction (mentioning; confidence: 99%)
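One standard workaround for the "out-of-sample" issue this statement raises is a Nyström-style extension, which embeds a new point from its kernel affinities to the training set without recomputing the eigendecomposition. This is an assumed remedy for illustration; reference [17] is not reproduced in the excerpt.

```python
# Illustrative Nystrom-style out-of-sample extension for a diffusion-map
# embedding (assumed approach; epsilon must match the training kernel).
import numpy as np

def nystrom_extend(x_new, X_train, vecs, vals, epsilon):
    """Project x_new onto eigenvectors (vecs) / eigenvalues (vals)
    of the training Markov matrix."""
    k = np.exp(-np.sum((X_train - x_new) ** 2, axis=1) / epsilon)
    k /= k.sum()              # row of the extended Markov matrix
    return (k @ vecs) / vals  # Nystrom formula: psi(x) = (K v) / lambda
```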
“…Since the atoms will not be completely orthogonal (e.g., due to the overcompleteness of Ψ), we penalize using (11). To update each of the dictionary atoms ψ_j^i, we use the following update rule:…”
Section: Unsupervised Mapping With a Block-Incoherent Dictionary (mentioning; confidence: 99%)
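The paper's penalty, its equation (11), and the update rule are not reproduced in this excerpt, so the following is only a generic sketch of the idea being described: discouraging coherence among dictionary atoms. It assumes the common Gram-matrix penalty ||ΨᵀΨ − I||_F², which may differ from the authors' actual formulation.

```python
# Hedged sketch of an incoherence penalty on unit-norm dictionary atoms,
# using the standard Gram-matrix term (an assumption, not equation (11)).
import numpy as np

def incoherence_penalty(Psi):
    """Penalize non-orthogonality among the columns (atoms) of Psi."""
    G = Psi.T @ Psi                          # Gram matrix of atom inner products
    return np.linalg.norm(G - np.eye(G.shape[1]), "fro") ** 2

def update_atom(Psi, j, lr=0.01):
    """One gradient step on atom j against the incoherence penalty."""
    G = Psi.T @ Psi
    grad = 4 * Psi @ (G - np.eye(G.shape[1]))[:, j]   # d(penalty)/d(psi_j)
    psi = Psi[:, j] - lr * grad
    Psi[:, j] = psi / np.linalg.norm(psi)             # re-project to unit norm
    return Psi
```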