2022
DOI: 10.1109/tgrs.2022.3207933
Hyperspectral Image Classification Using Group-Aware Hierarchical Transformer

Cited by 141 publications (28 citation statements)
References 46 publications
“…CNN-based networks include 1D CNN [14], 2D CNN [60], 3D CNN [61], RNN [21], SSRN [27], HybridSN [31], and RIAN [13]. Transformer-based methods contain SF [45], SSFTT [46], and GAHT [47].…”
Section: Compared Methods
confidence: 99%
“…• 1D CNN [14]: This method uses two 1D convolutional layers to extract features. GAHT [47]: This work utilizes the hierarchical transformer network with the grouped pixel embedding module. This module confines the multi-head self-attention for extracting the spatial-spectral feature.…”
Section: Compared Methods
confidence: 99%
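The statement above describes GAHT's grouped pixel embedding, which splits the spectral bands of an HSI patch into groups and embeds each group separately so that subsequent self-attention stays confined within groups. The following is a minimal sketch of that idea, not the authors' implementation; the function name, group count, and projection shapes are all illustrative assumptions.

```python
import numpy as np

def grouped_pixel_embedding(x, groups, weights):
    """Hypothetical sketch of a grouped pixel embedding.

    x: (pixels, bands) flattened HSI patch.
    weights: one (bands // groups, embed_dim) projection per group.
    Returns (groups, pixels, embed_dim) token groups; attention applied
    later per group is thereby confined to within-group interactions.
    """
    chunks = np.split(x, groups, axis=-1)          # split spectral bands into groups
    return np.stack([c @ w for c, w in zip(chunks, weights)])

rng = np.random.default_rng(0)
x = rng.standard_normal((49, 200))                 # e.g. a 7x7 patch with 200 bands
ws = [rng.standard_normal((50, 64)) for _ in range(4)]
tokens = grouped_pixel_embedding(x, 4, ws)
print(tokens.shape)  # (4, 49, 64)
```

Each of the four groups yields its own 49-token sequence, so a per-group attention layer never mixes tokens across spectral groups.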
“…Zhang proposed a convolution transformer mixer (CTMixer) network combining the advantages of a vision transformer and a convolutional neural network, which also uses the MHSA mechanism to improve classification accuracy [59]. Mei built a multi-head self-attention that encodes the semantic context-aware representation to obtain discriminative features [60]. He proposed the HSI-BERT model, which has a generalization ability using the MHSA layer.…”
Section: Introduction
confidence: 99%
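The MHSA mechanism referenced throughout these statements can be summarized in a few lines. This is a generic textbook sketch, not the code of any cited paper; the projection matrices and head count below are assumed for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)    # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, heads):
    """Minimal MHSA sketch. x: (tokens, d); wq/wk/wv/wo: (d, d)."""
    n, d = x.shape
    dh = d // heads
    def split(h):                               # (tokens, d) -> (heads, tokens, dh)
        return h.reshape(n, heads, dh).transpose(1, 0, 2)
    q, k, v = split(x @ wq), split(x @ wk), split(x @ wv)
    att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))  # per-head attention maps
    out = (att @ v).transpose(1, 0, 2).reshape(n, d)        # concatenate heads
    return out @ wo                                         # final output projection

rng = np.random.default_rng(1)
x = rng.standard_normal((49, 64))               # 49 pixel tokens, embedding dim 64
ws = [rng.standard_normal((64, 64)) * 0.1 for _ in range(4)]
y = multi_head_self_attention(x, *ws, heads=4)
print(y.shape)  # (49, 64)
```

Each head attends over all 49 tokens with its own 16-dimensional subspace, which is what lets MHSA-based models encode the context-aware representations the citing papers describe.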