2021
DOI: 10.1007/s11760-020-01828-8
Fully convolutional network with attention modules for semantic segmentation

Cited by 15 publications (3 citation statements) · References 19 publications
“…In addition, we also achieve 69.5% mIoU on the test set at 112 FPS. Compared with other models, including PSPNet [16] (412.2 GFLOPs and 250.8 M parameters), DeepLab [21] (457.8 GFLOPs and 262.1 M parameters), and FCN+PPAM+SAM [36] (38.7 GFLOPs and 42.41 M parameters), our model is considerably smaller.…”
Section: Qualitative Analysis of Segmentation Results
Mentioning confidence: 86%
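The excerpt above quotes mIoU (mean intersection-over-union), the standard accuracy metric for semantic segmentation. As context, here is a minimal sketch of how mIoU is typically computed from flat predicted and ground-truth label arrays via a confusion matrix; the function name and setup are illustrative, not taken from the cited papers.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes, from flat integer label arrays.

    Builds a confusion matrix cm[true, predicted], then for each class
    computes intersection / union and averages over classes that occur.
    """
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, t in zip(pred.ravel(), target.ravel()):
        cm[t, p] += 1
    inter = np.diag(cm).astype(float)
    # union = predicted-as-class + labeled-as-class - intersection
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    ious = inter / np.maximum(union, 1)
    return ious[union > 0].mean()
```

In practice segmentation frameworks accumulate the confusion matrix over the whole test set before taking the per-class ratio, rather than averaging per-image IoUs.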
“…In recent years, RCNU-Net, proposed by Boxiong Huang, Tongyuan Huang, et al., has leveraged precisely this concept [26]. Its primary architecture is built upon the traditional U-Net framework, wherein the inclusion of CBAM in the skip connections addresses the semantic gaps among multiscale polyp features, thereby strengthening RCNU-Net for polyp segmentation.…”
Section: Image Segmentation Network
Mentioning confidence: 99%
“…Recent transfer learning methods [11][12][13], chiefly domain adaptation methods [5,14], improve generalization to unlabeled target data by aligning distributions. They have been applied in various applications, such as image classification [32], semantic segmentation [25,33], and object detection [4,5]. However, they cannot learn discriminative features in the target domain.…”
Section: Related Work
Mentioning confidence: 99%
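The excerpt above describes domain adaptation as aligning source and target feature distributions. One commonly used alignment criterion (not necessarily the one in the cited methods) is the maximum mean discrepancy (MMD), which measures the distance between two sample sets under a kernel; a small NumPy sketch, with illustrative names:

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared MMD between sample sets x and y using an RBF kernel.

    x, y: arrays of shape (n, d) and (m, d). Returns
    mean k(x,x) + mean k(y,y) - 2 * mean k(x,y); near 0 when the
    two sets come from the same distribution.
    """
    def k(a, b):
        # pairwise squared Euclidean distances, then RBF kernel
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

In adaptation training, a term like this computed on source and target features is added to the task loss so the feature extractor learns domain-invariant representations.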