2024
DOI: 10.1109/tgrs.2024.3374324
Cross Hyperspectral and LiDAR Attention Transformer: An Extended Self-Attention for Land Use and Land Cover Classification

Swalpa Kumar Roy,
Atri Sukul,
Ali Jamali
et al.
Cited by 11 publications (1 citation statement)
References 49 publications
“…To this end, a popular technique called the attention mechanism in the fields of neural machine translation [136][137][138] and computer vision [139][140][141][142] was introduced to capture salient spectral bands and relevant spatial areas of HSI cubes [143,144]. Many effective attention modules, such as self-attention (SA) modules [145], squeeze-and-excitation (SE) modules [146], convolutional block attention modules (CBAMs) [147], non-local modules [148], etc., have been proposed to enhance the discrimination of features.…”
Section: Introduction
Mentioning confidence: 99%
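The modules named above all recalibrate features with learned attention weights. As a concrete illustration, here is a minimal NumPy sketch of the squeeze-and-excitation (SE) idea the statement cites: global average pooling "squeezes" each channel to a scalar, a small bottleneck produces per-channel weights in (0, 1), and the input channels are rescaled. The function name, weight shapes, and reduction ratio are illustrative assumptions, not the cited paper's implementation.

```python
import numpy as np

def squeeze_and_excitation(x, w1, w2):
    """Minimal SE-style channel recalibration (illustrative sketch).

    Assumes x has shape (channels, height, width) and w1, w2 are the
    two fully connected weight matrices of the bottleneck.
    """
    # Squeeze: global average pooling over the spatial dimensions
    z = x.mean(axis=(1, 2))                  # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid
    s = np.maximum(w1 @ z, 0.0)              # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # shape (C,), weights in (0, 1)
    # Rescale each input channel by its attention weight
    return x * s[:, None, None]

# Toy usage: 4 channels, hypothetical reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
y = squeeze_and_excitation(x, w1, w2)
print(y.shape)  # (4, 8, 8)
```

Because the sigmoid weights lie strictly in (0, 1), the module can only attenuate channels, never amplify them; the other cited modules (CBAM, non-local/self-attention) extend the same recalibration idea to spatial positions and pairwise interactions.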