2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
DOI: 10.1109/bibm55620.2022.9995040
MALUNet: A Multi-Attention and Light-weight UNet for Skin Lesion Segmentation

Cited by 72 publications (20 citation statements)
References 25 publications
“…Compared with most state-of-the-art methods, better segmentation results are obtained.

Methods              Dice   Jaccard  Accuracy
U-Net [12]           77.03  67.15    90.52
SwinUNet [19]        77.29  66.51    91.21
UCTransNet [20]      78.90  69.35    91.29
TransUNet [18]       83.58  74.33    92.84
SkinNet [27]         85.50  76.70    93.20
DCL-PSI [28]         85.60  77.70    94.00
PA-Net [29]          85.80  77.60    93.60
iMSCGnet [30]        85.83  77.75    93.58
FrCN [31]            87.08  77.11    94.03
UTNetV2 [32]         87.23  77.35    95.84
UNeXt-S [33]         87.80  78.26    95.95
MALUNet [34]         88.13  78.78    96.18
TransFuse [35]       88…    …        …

Methods              Dice   Jaccard  Accuracy
U-Net [12]           83.62  75.17    91.75
UCTransNet [20]      86.41  78.60    93.13
SwinUNet [19]        86.02  78.21    92.96
UNet++ [36]          87.83  78.31    94.02
Attention-UNet [37]  87.91  78.43    94.13
UTNetV2 [32]         88.25  78.97    94.32
MSRF-Net [38]        88.13  —        —
TransUNet [18]       88.32  81.25    93.67
UNeXt-S [33]         88.33  79.09    94.39
SANet [39]           88.59  79.52    94.39
TransFuse [35]       89.27  80.63    94.66
MALUNet [34]         89.04  80.25    94.62
MCGU-Net [40]        89.50  —        95.50
DoubleU-Net [41]     89.62  —        —
FAuNet               90.42  84.23    94.93…”
Section: Compared With the Representative Methods
confidence: 99%
“…The mask segmentor identifier (SI) (Figure 3c) takes the output from the FiLM decoder as input and generates a predicted segmentation mask with values in [0, 1] over the categories (RV, LV, LV-Myo, and background) in the training dataset. We exploit a novel supervised loss, the weighted soft background focal (WSBF) loss, for the base model, which is a combination of a background focal dice (BFD) loss and a weighted soft focal loss (WSFL): two weighting factors are designed to account for class imbalance and are treated as hyperparameters, a focusing term is used to down-weigh background-dominated examples, with its exponent varying in the range [1, 3], and a cross-entropy term is also included.…”
Section: Methods
confidence: 99%
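The original symbols of the WSBF formula were lost in extraction, so the exact definition cannot be recovered here. As a rough illustration of the general pattern described in the statement above (a class-weighted focal term combined with a soft Dice term), the PyTorch sketch below shows one hypothetical way such a combined loss could be written; the function name, weighting scheme, and default hyperparameters are assumptions, not the authors' published formulation.

```python
# Illustrative sketch only: a generic class-weighted focal + soft-Dice loss.
# The background down-weighting exponent and per-class weights of the WSBF
# loss are replaced by hypothetical placeholders here.
import torch
import torch.nn.functional as F


def focal_plus_dice_loss(logits, targets, class_weights=None, gamma=2.0, eps=1e-6):
    """logits: (N, C, H, W) raw scores; targets: (N, H, W) integer class labels."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()

    # Focal term: down-weights easy pixels via (1 - p_t)^gamma on the true class.
    log_probs = torch.log(probs.clamp(min=eps))
    focal = -((1.0 - probs) ** gamma) * log_probs * one_hot
    if class_weights is not None:  # optional per-class imbalance weights, shape (C,)
        focal = focal * class_weights.view(1, -1, 1, 1)
    focal_loss = focal.sum(dim=1).mean()

    # Soft-Dice term computed over all classes.
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    dice = (2.0 * intersection + eps) / (probs.sum(dims) + one_hot.sum(dims) + eps)
    dice_loss = 1.0 - dice.mean()

    return focal_loss + dice_loss
```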
“…The emerging success of deep convolutional neural networks (CNNs) has rendered them the de facto model in solving high-level computer vision tasks [13]. However, such approaches mostly rely on large amounts of annotated data for training, the acquisition of which is expensive and laborious, especially for medical imaging/diagnostic radiology data.…”
Section: Introduction
confidence: 99%
“…A straightforward method with a small number of parameters and low complexity was proposed by Ruan et al. [38]. In order to improve model performance while significantly reducing model parameters and computational complexity, the model combines four unique attention modules with a UNet architecture.…”
Section: Related Work
confidence: 99%
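To make the architectural idea in the statement above concrete, the following minimal PyTorch sketch shows a light-weight UNet-style stage whose skip connection is modulated by a single spatial-attention gate. The module names and design are illustrative assumptions only; MALUNet's four attention modules are more elaborate than the one gate shown here.

```python
# Minimal illustrative sketch: a light-weight UNet-style encoder/decoder stage
# with an attention-gated skip connection. NOT MALUNet's actual design.
import torch
import torch.nn as nn


class SpatialAttentionGate(nn.Module):
    """Re-weights skip features with a sigmoid mask from avg/max channel pooling."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask


class TinyAttentionUNet(nn.Module):
    """One down/up stage with small channel counts to keep the parameter count low."""
    def __init__(self, in_ch=3, base_ch=16, num_classes=1):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, padding=1),
            nn.BatchNorm2d(base_ch), nn.ReLU(inplace=True))
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(
            nn.Conv2d(base_ch, base_ch * 2, 3, padding=1),
            nn.BatchNorm2d(base_ch * 2), nn.ReLU(inplace=True))
        self.attn = SpatialAttentionGate()
        self.up = nn.ConvTranspose2d(base_ch * 2, base_ch, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(base_ch * 2, base_ch, 3, padding=1),
            nn.BatchNorm2d(base_ch), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(base_ch, num_classes, 1)

    def forward(self, x):
        skip = self.enc(x)                      # full-resolution features
        mid = self.bottleneck(self.down(skip))  # half-resolution features
        up = self.up(mid)                       # upsample back to full resolution
        fused = torch.cat([self.attn(skip), up], dim=1)  # attention-gated skip
        return self.head(self.dec(fused))


if __name__ == "__main__":
    model = TinyAttentionUNet()
    out = model(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 1, 64, 64])
```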