2022
DOI: 10.3390/math10101750
Spatial Channel Attention for Deep Convolutional Neural Networks

Abstract: Recently, attention mechanisms that combine spatial and channel information have been widely used in deep convolutional neural networks (CNNs), proving their great potential for improving model performance. However, these mechanisms usually compress spatial information with 2D global pooling operations, or reduce the computational overhead of channel attention with scaling methods, both of which cause severe information loss. Therefore, we propose a spatial channel attention mechanism that captures cross-dime…
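For context, the "2D global pooling" the abstract criticizes is the squeeze step of SENet-style channel attention, which collapses each H×W feature map to a single scalar. A minimal PyTorch sketch of that common baseline pattern (an illustration only, not the paper's proposed module; the class name and reduction ratio are our assumptions):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention. The 2D global average pooling compresses
    each H x W map to one scalar (the spatial-information loss the abstract
    refers to); the reduction ratio r is the 'scaling method' that trims
    computation in the bottleneck MLP."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # (B, C, H, W) -> (B, C, 1, 1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.mlp(self.pool(x).view(b, c))      # per-channel gate in (0, 1)
        return x * w.view(b, c, 1, 1)              # rescale each channel
```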

Cited by 39 publications (12 citation statements) | References 16 publications
“…To evaluate the performance of the proposed MS-DAM, we compared the model's classification results with those of existing self-attention methods. The six existing self-attention methods include SENet, the bottleneck attention module (BAM) [36], CBAM, efficient channel attention for deep convolutional neural networks (ECA-Net) [37], attention gated networks (AGUNet) [38], and MS-CAM. Our method, MS-DAM, can be used as an independent module alongside these methods.…”
Section: Results
confidence: 99%
“…The primary function of spatial attention is to capture the interdependencies between different regions of an image by emphasizing the important regions and suppressing the less relevant ones. This is achieved through the use of a gating mechanism that generates a spatial map, which is multiplied by the input features to amplify or attenuate them [43]. This gating mechanism is typically implemented using learnable parameters that are trained alongside the rest of the model.…”
Section: Methods
confidence: 99%
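A minimal sketch of such a spatial gate, assuming PyTorch; pooling over the channel axis followed by a 7×7 convolution is the common CBAM-style formulation and may differ in detail from the design in [43]:

```python
import torch
import torch.nn as nn

class SpatialGate(nn.Module):
    """Builds a 1 x H x W spatial map from channel-pooled statistics and
    multiplies it into the input, amplifying informative regions and
    attenuating the rest."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # The convolution weights are the learnable parameters of the gate.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)         # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate                            # broadcast over channels
```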
“…In the case of violence detection, the useful information is the nature of the interaction dynamics between people, which we want to classify and localize in an image. Hereby, the Convolutional Block Attention Module (CBAM), proposed in [22] and whose concept is used in [25][26][27], consists of two main sub-modules (as shown in Figure 1): 1) a channel attention module, and 2) a spatial attention module. As illustrated in Figure 1, the input features F ∈ C×H×W are first fed to the channel attention module.…”
Section: Convolutional Block Attention Module
confidence: 99%
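A hedged sketch of how the two sub-modules compose sequentially, reusing the ChannelAttention and SpatialGate classes from the sketches above (note that CBAM's channel module also uses max pooling, which the SE-style sketch above omits; the class names are ours, not the reference implementation's):

```python
import torch
import torch.nn as nn

class CBAMBlock(nn.Module):
    """CBAM-style composition: the input F of shape (B, C, H, W) is refined
    by channel attention first, then by spatial attention; output shape is
    unchanged."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)  # sketched above (SE-style)
        self.spatial = SpatialGate()               # sketched above

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))

# Example usage: refine a batch of feature maps with C = 64.
# y = CBAMBlock(64)(torch.randn(2, 64, 32, 32))   # shape is preserved
```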