A CBAM Based Multiscale Transformer Fusion Approach for Remote Sensing Image Change Detection
2022 · DOI: 10.1109/jstars.2022.3198517

Cited by 89 publications (34 citation statements) · References 39 publications
“…STransUNet [66] combined transformer and UNet architectures, which can capture shallow detail features and model global context in high-level features. In order to capture the spatial and channel information of feature maps, MTCNet [67] divides CBAM into a SAM and a CAM, which are applied to the front-end and back-end of the multi-scale transformer, respectively. However, these methods do not consider the fusion of multi-scale tokens when using transformers to model the long-range context information of images.…”
Section: B. Transformer-Based Methods (mentioning) · Confidence: 99%
“…The Convolutional Block Attention Module (CBAM) [30] is a lightweight and versatile attention module; its structural diagram appears in the lower-left corner of Fig. 4.…”
Section: B. Attention Mechanism (mentioning) · Confidence: 99%
“…The CBAM attention mechanism is a typical hybrid attention mechanism that sequentially applies a channel attention module (CAM) and a spatial attention module (SAM). Compared to using channel attention or spatial attention alone, CBAM achieves better results [26]. As illustrated in Figure 10, CBAM takes an intermediate feature map F ∈ R^{C×H×W} as input.…”
Section: Soft-Pooling and Multi-Scale Convolution CBAM (mentioning) · Confidence: 99%
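The sequential CAM-then-SAM pipeline described in the excerpt above can be sketched in plain NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the MLP and 7×7 convolution weights are random stand-ins for learned parameters, and the channel-reduction ratio of 4 is an assumption.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2-D cross-correlation with 'same' zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def channel_attention(F, reduction=4):
    """CAM: average- and max-pool F over (H, W), pass both vectors through a
    shared two-layer MLP, add, apply sigmoid, and rescale the channels."""
    C = F.shape[0]
    rng = np.random.default_rng(0)
    W1 = 0.1 * rng.standard_normal((C // reduction, C))  # stand-in learned weights
    W2 = 0.1 * rng.standard_normal((C, C // reduction))
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)         # ReLU hidden layer
    att = 1.0 / (1.0 + np.exp(-(mlp(F.mean(axis=(1, 2))) + mlp(F.max(axis=(1, 2))))))
    return F * att[:, None, None]

def spatial_attention(F, ksize=7):
    """SAM: average- and max-pool F over channels, mix the two maps with a
    (stand-in) 7x7 convolution, apply sigmoid, and rescale spatial positions."""
    rng = np.random.default_rng(1)
    k = 0.05 * rng.standard_normal((2, ksize, ksize))    # stand-in conv weights
    m = conv2d_same(F.mean(axis=0), k[0]) + conv2d_same(F.max(axis=0), k[1])
    att = 1.0 / (1.0 + np.exp(-m))
    return F * att[None, :, :]

def cbam(F):
    # CBAM applies channel attention first, then spatial attention, sequentially.
    return spatial_attention(channel_attention(F))

F = np.random.default_rng(42).standard_normal((8, 5, 5))
out = cbam(F)
print(out.shape)  # (8, 5, 5): attention reweights F but preserves its shape
```

Both attention maps lie in (0, 1), so the output is an element-wise reweighting of F with unchanged shape; this shape-preserving design is what lets CBAM be inserted into existing CNN or transformer blocks, including the front-end/back-end placement described for MTCNet above.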