2021
DOI: 10.1109/tmm.2021.3068576
Spatial Pyramid Attention for Deep Convolutional Neural Networks

Cited by 29 publications (9 citation statements)
References 26 publications
“…The SPA is originally proposed in the previous work (Ma et al 2021). It adopts a spatial pyramid structure to characterize three-scale attention weights.…”
Section: Cross-level Feature Fusion Module (mentioning, confidence: 99%)
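The SPA described in this statement pools a feature map over a spatial pyramid and turns the multi-scale descriptor into per-channel attention weights. A minimal NumPy sketch of that idea follows; the pyramid scales (1, 2, 4), the bottleneck reduction ratio, and the random stand-in weights are all assumptions for illustration, not the paper's exact design (learned parameters replace the random matrices in practice).

```python
import numpy as np

def adaptive_avg_pool(x, out):
    # Average-pool a (C, H, W) map into a (C, out, out) grid of bins.
    C, H, W = x.shape
    res = np.zeros((C, out, out))
    for i in range(out):
        for j in range(out):
            h0, h1 = i * H // out, (i + 1) * H // out
            w0, w1 = j * W // out, (j + 1) * W // out
            res[:, i, j] = x[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return res

def spa_attention(x, scales=(1, 2, 4), reduction=4, seed=0):
    # Pyramid-pool at each scale, flatten, and map the concatenated
    # descriptor to per-channel weights via a two-layer bottleneck.
    # Random weights stand in for learned parameters (assumption).
    C = x.shape[0]
    rng = np.random.default_rng(seed)
    desc = np.concatenate([adaptive_avg_pool(x, s).reshape(-1) for s in scales])
    W1 = rng.standard_normal((max(C // reduction, 1), desc.size)) * 0.05
    W2 = rng.standard_normal((C, W1.shape[0])) * 0.05
    h = np.maximum(W1 @ desc, 0.0)            # ReLU bottleneck
    w = 1.0 / (1.0 + np.exp(-(W2 @ h)))       # sigmoid -> weights in (0, 1)
    return x * w[:, None, None]               # channel-wise reweighting

x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = spa_attention(x)
print(y.shape)  # (8, 16, 16)
```

The output keeps the input shape: each channel is simply rescaled by its attention weight, so the module can drop into a network without changing downstream layer sizes.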
“…In this study, we extend the SPA structure by aggregating the feature context at four different scales to better capture fine, local, coarse, and global contextual information in CT images, as illustrated in figure 3(b). To conserve space, please refer to Ma et al (2021) for detailed information on the SPA structure.…”
Section: Cross-level Feature Fusion Module (mentioning, confidence: 99%)
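Adding a fourth pyramid level grows the pooled descriptor quadratically in the scale. The citing paper does not state its exact scales here, so (1, 2, 4, 8) below is an assumed example, compared against the three-scale descriptor:

```python
# Descriptor length per channel when pyramid-pooling at four scales.
# The scales (1, 2, 4, 8) are an assumption for illustration.
four_scales = (1, 2, 4, 8)
three_scales = (1, 2, 4)
four_len = sum(s * s for s in four_scales)    # 1 + 4 + 16 + 64
three_len = sum(s * s for s in three_scales)  # 1 + 4 + 16
print(four_len, three_len)  # 85 21
```

Under these assumed scales, the fourth level roughly quadruples the descriptor fed to the attention layers (85 vs. 21 bins per channel), which is the cost of the finer context the quote describes.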
“…information and are capable of retaining the spatial characteristics within each channel (Ma et al 2021). As shown in Figure 4, we use multiple max-pooling layers to eliminate redundant spatial information in the feature maps.…”
Section: Cyclic Disentanglement Module (mentioning, confidence: 99%)
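The max-pooling the statement above relies on keeps only the strongest response in each window, discarding redundant spatial detail while preserving the per-channel layout. A self-contained sketch (plain NumPy, non-overlapping windows assumed):

```python
import numpy as np

def max_pool2d(x, k):
    # Non-overlapping k x k max pooling over a (C, H, W) feature map;
    # trailing rows/columns that do not fill a window are dropped.
    C, H, W = x.shape
    x = x[:, : H // k * k, : W // k * k]
    return x.reshape(C, H // k, k, W // k, k).max(axis=(2, 4))

x = np.arange(16, dtype=float).reshape(1, 4, 4)
print(max_pool2d(x, 2))  # [[[ 5.  7.] [13. 15.]]]
```

Each output bin is the maximum of its 2x2 window, so the spatial resolution halves while the channel dimension, and hence each channel's identity, is untouched.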
“…Fully convolutional networks (FCN) were effectively used for semantic segmentation [1] and, consequently, many deep learning-based methods were proposed. They can be divided mainly into three categories: gradual resolution-recovery-based methods [4], multi-scale pyramid-based methods [19]-[21], and attention module-based methods [22]-[24]. The U-shaped network was proposed in Ref.…”
Section: Introduction (mentioning, confidence: 99%)