2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9891928
Lightweight Self-Attention Network for Semantic Segmentation

Cited by 3 publications (4 citation statements). References 29 publications.
“…The performance of the FCB-SwinV2 Transformer for the Kvasir-SEG [11] dataset is reported in Table 12 for a random data partition. Comparisons to FCN-Transformer model performance from the original paper [7] and from [42] with the advanced data augmentation technique named 'Spatially Exclusive Pasting' (SEP) are provided. Note that results for the Meta-Polyp model reported in [48] (95.90 mDice and 92.10 mIoU) are not included in Table 12.…”
Section: Random Data Partitioning Results
confidence: 99%
“…However, for the relatively small image datasets used to evaluate colonoscopy segmentation models, minor differences in how the data is partitioned could cause noticeable performance changes. Recently, this has been demonstrated for models evaluated on the Kvasir-SEG dataset, where performance changes greater than 1% were shown to occur for different data partitions [42]. This is significant because the current highest-performing models are now typically separated by performance differences of less than 1%.…”
Section: A. Dataset Selection and Partitioning
confidence: 95%
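The partition-sensitivity point quoted above can be illustrated with a minimal sketch: repeat the random split several times and measure the spread of the segmentation score. This is an illustrative outline only, not code from the cited works; evaluate_model is a hypothetical callback that trains on one split and returns the test mDice, and the split sizes are assumptions.

    # Hypothetical sketch: estimating how sensitive a segmentation metric is to the
    # random train/test partition of a small dataset such as Kvasir-SEG.
    import numpy as np
    from sklearn.model_selection import train_test_split

    def partition_sensitivity(image_ids, evaluate_model, n_splits=5, test_size=0.2):
        """Train and evaluate on several random partitions; report the spread in mDice."""
        scores = []
        for seed in range(n_splits):
            train_ids, test_ids = train_test_split(
                image_ids, test_size=test_size, random_state=seed)
            # evaluate_model is a placeholder: train on train_ids, return mDice on test_ids.
            scores.append(evaluate_model(train_ids, test_ids))
        scores = np.asarray(scores)
        return scores.mean(), scores.std(), scores.max() - scores.min()

If the max-min spread from such a check exceeds the gap between competing models (here, roughly 1%), comparisons based on a single partition are not conclusive.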
“…A preliminary conference version [1] of this paper was presented at IJCNN 2022. We extend our previous study with three major improvements.…”
Section: Introduction
confidence: 99%
“…Fig. 3: Architectures of the standard self-attention module, the disentangled non-local module [39], the LSA module [1], and the LSA++ module.…”
confidence: 99%