2022 12th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS)
DOI: 10.1109/whispers56178.2022.9955116
Hyperspectral Image Classification Based on Multi-Level Spectral-Spatial Transformer Network

Cited by 13 publications (5 citation statements)
References 11 publications
“…The Transformer model is also used in the image field. It can help improve the performance of image classification [26,27], contribute to generating more realistic pictures [28], assist in object detection [29], and play a critical role in semantic segmentation [30].…”
Section: Transformer Module
confidence: 99%
“…The utilization of deep learning methodologies has yielded commendable results, exemplified by Contextual-CNN [5], DBN [6], and HybridSN [7]. The integration of transformer models into the realm of image processing has introduced remarkable advancements in hyperspectral image classification [8], delivering heightened precision as demonstrated by SST [9] and SpectralFormer [10], both of which leverage the transformer mechanism. Nonetheless, due to inherent limitations within the data itself, hyperspectral images often confront challenges such as homospectral foreign objects and anomalies, coupled with susceptibility to environmental factors during collection, subsequently constraining their practical applicability [11].…”
Section: Introduction
confidence: 99%
“…Transformer [15] has a strong ability to capture long-term and short-term information, and has achieved brilliant achievements in computer vision tasks, including image classification [16][17][18], image deraining [19], object detection [20,21], low-level image processing [22,23], action recognition [24][25][26], and other fields. However, these methods tend to require a large amount of GPU memory, thereby hindering the further advancement of Transformer.…”
Section: Introduction
confidence: 99%