2022
DOI: 10.1109/access.2022.3205602

EMCA: Efficient Multiscale Channel Attention Module

Abstract: Attention mechanisms have been explored with CNNs across the spatial and channel dimensions. However, existing methods restrict their attention modules to capturing local interactions at a single scale. This paper tackles the following question: can one consolidate multi-scale aggregation while learning channel attention more efficiently? To this end, we apply channel-wise attention over multiple feature scales, which empirically shows its aptitude to replace the limited local, uni-scale attention modules.…
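The abstract describes channel attention aggregated over multiple feature scales rather than a single local scale. Below is a minimal, illustrative PyTorch sketch of that general idea; the module name, the pooling sizes, and the ECA-style 1D-convolution gating are assumptions made for illustration and are not the paper's exact EMCA design.

```python
# Illustrative sketch of multi-scale channel attention (not the published EMCA module).
# Assumes: per-scale adaptive average pooling + a lightweight 1D conv over channels.
import torch
import torch.nn as nn


class MultiScaleChannelAttention(nn.Module):
    def __init__(self, channels: int, pool_sizes=(1, 2, 4), kernel_size: int = 3):
        super().__init__()
        # One adaptive average pool per scale: 1x1, 2x2, 4x4 spatial descriptors.
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(s) for s in pool_sizes)
        # Lightweight 1D conv over the channel dimension (no fully connected bottleneck).
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Reduce each scale to a per-channel descriptor, then average across scales.
        descriptors = [pool(x).mean(dim=(2, 3)) for pool in self.pools]  # each (B, C)
        y = torch.stack(descriptors, dim=0).mean(dim=0)                  # (B, C)
        # Cross-channel interaction via 1D convolution.
        y = self.conv(y.unsqueeze(1)).squeeze(1)                         # (B, C)
        # Re-weight the input feature map channel-wise.
        return x * self.sigmoid(y).view(b, c, 1, 1)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    attn = MultiScaleChannelAttention(64)
    print(attn(feats).shape)  # torch.Size([2, 64, 32, 32])
```

The sketch keeps the attention branch cheap (a single shared 1D convolution) while drawing its channel descriptors from several pooled scales, which is the trade-off the abstract highlights; the actual aggregation and gating used by EMCA should be taken from the paper itself.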

Cited by 9 publications (1 citation statement)
References: 78 publications
“…To verify the effectiveness of the proposed model, the detection results of the proposed model on MS COCO dataset are compared with four multi-stage methods, including Faster R-CNN [15], Mask RCNN [47], Libra RCNN [48], AutoDet [49], six one-stage methods, including YOLOv3 [50], SSD [29], RefineDet [51], RetinaNet [18], GHM [42], EMCA [53] and five anchor-free methods, including CornerNet [19], FCOS [21], ReFPN-FCOS [54], Pseudo-IoU [54], ObjectBox [56] methods, and the comparison results are shown in Table 1.…”
Section: B. Comparisons With State-of-the-art Methods
confidence: 99%