2022
DOI: 10.1109/tgrs.2022.3216319
Global–Local Transformer Network for HSI and LiDAR Data Joint Classification

Cited by 56 publications (8 citation statements)
References 45 publications
“…On the one hand, WMM [35], online multiview deep forest (OMDF) [47], and Kronecker product (KP) [38] are compared as non-DL fusion methods to demonstrate the superiority of neural networks in feature extraction. On the other hand, five DL-based fusion methods, including two-branch CNN (t-CNN) [48], feature intersecting learning-based CNN (FIL-CNN) [49], cross-channel reconstruction network (CCR-Net) [50], global-local Transformer (GLT) [51], and multi-modal fusion network (MFNet) [52], are chosen for comparison. Specifically, t-CNN, FIL-CNN, CCR-Net, and MFNet are all based on CNNs for feature extraction, while GLT is based on both a CNN and a Transformer for feature extraction.…”
Section: Performance Comparison (mentioning)
confidence: 99%
“…For example, Zhuo et al. [52] simultaneously utilized a multiscale CNN and a multihop GCN to capture multiscale features containing local-global structural relationships. A novel global-local transformer network [53] learns local spatial features using a multiscale aggregated CNN and extracts global spectral sequence properties using a ViT. Taking global spatial context into account, [54] learns discriminative spatial features by overcoming the limitation of the receptive field and develops a dual-view spectral aggregation model to capture short- and long-view spectral features.…”
Section: B. Global-Local Feature Extraction Network for RS Image Proce… (mentioning)
confidence: 99%
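The excerpt above describes the general global-local pattern: a local branch built from multiscale CNNs over the spatial patch and a global branch that treats the spectral sequence with a ViT-style encoder. The PyTorch sketch below only illustrates that pattern; the module name, dimensions, token construction, and concatenation head are assumptions for illustration, not the published GLT architecture.

```python
# Hypothetical sketch of a global-local extractor: multiscale CNN for local
# spatial features, transformer encoder over the spectral sequence for global
# context. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class GlobalLocalExtractor(nn.Module):
    def __init__(self, bands=144, n_classes=15, dim=64):
        super().__init__()
        # Local branch: multiscale 2-D convolutions, aggregated by channel concat.
        self.local_convs = nn.ModuleList([
            nn.Conv2d(bands, dim, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.local_pool = nn.AdaptiveAvgPool2d(1)
        # Global branch: each spectral band of the centre pixel becomes a token.
        self.band_embed = nn.Linear(1, dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=2 * dim, batch_first=True)
        self.global_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(3 * dim + dim, n_classes)

    def forward(self, x):                                   # x: (B, bands, H, W)
        local = torch.cat([conv(x) for conv in self.local_convs], dim=1)
        local = self.local_pool(local).flatten(1)            # (B, 3*dim)
        centre = x[:, :, x.shape[2] // 2, x.shape[3] // 2]   # (B, bands)
        tokens = self.band_embed(centre.unsqueeze(-1))        # (B, bands, dim)
        global_feat = self.global_encoder(tokens).mean(dim=1) # (B, dim)
        return self.classifier(torch.cat([local, global_feat], dim=1))
```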
“…Therefore, inspired by the classification of HSI, researchers have applied fusion models of CNNs and transformers to the joint classification task of HSI and LiDAR-DSM. Ding et al. [25] introduced the Global-Local Transformer Network (GLT-Net), designed to capture the global-local correlation features from the inputs, effectively enhancing classification outcomes. However, this method only concatenated features from HSI and LiDAR-DSM without deep information fusion learning.…”
Section: Introduction (mentioning)
confidence: 99%
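To make the fusion style criticized in the excerpt concrete, the sketch below shows feature-level concatenation of independently extracted HSI and LiDAR-DSM features followed by a shared classifier. The branch designs and dimensions are placeholder assumptions, not the published GLT-Net design; the point is only that the two modalities interact nowhere except at the final concatenation, which is what "without deep information fusion learning" refers to.

```python
# Minimal sketch of feature-level fusion by concatenation for HSI and
# LiDAR-DSM patches. Branch architectures and sizes are illustrative
# assumptions; no cross-modal interaction happens before the concat.
import torch
import torch.nn as nn


class ConcatFusionClassifier(nn.Module):
    def __init__(self, hsi_bands=144, n_classes=15, dim=64):
        super().__init__()
        # Independent feature extractors for each modality.
        self.hsi_branch = nn.Sequential(
            nn.Conv2d(hsi_bands, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lidar_branch = nn.Sequential(
            nn.Conv2d(1, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Fusion is a plain concatenation plus a linear classifier.
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, hsi_patch, dsm_patch):
        f_hsi = self.hsi_branch(hsi_patch)      # (B, dim)
        f_dsm = self.lidar_branch(dsm_patch)    # (B, dim)
        return self.classifier(torch.cat([f_hsi, f_dsm], dim=1))
```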