2020
DOI: 10.1109/ACCESS.2020.3025372

MA-Net: A Multi-Scale Attention Network for Liver and Tumor Segmentation

Abstract: Automatically assessing the location and extent of the liver and liver tumors is critical for radiologists' diagnosis and the clinical process. In recent years, a large number of U-Net variants based on multi-scale feature fusion have been proposed to improve segmentation performance for medical images. Unlike previous works, which extract the context information of medical images by applying multi-scale feature fusion, we propose a novel network named Multi-scale Attention Net (MA-Net) by introduci…

Cited by 288 publications (146 citation statements)
References 34 publications
“…We quantified and compared the segmentation performance of our model using Average Precision (AP) at IoU thresholds of 0.3, 0.5, and 0.7, as defined in the COCO (Lin et al., 2014) and PASCAL VOC (Everingham et al., 2015) challenges. In comparisons with U-Net (Ronneberger et al., 2015) [using (Buda et al., 2019)'s official implementation], Fully Convolutional Networks (FCN) (Long et al., 2015), and Multi-Scale Attention Network (MANET) (Fan et al., 2020) [implemented using (Yakubovskiy, 2020)] (Figure 5), we found that LatentCADx demonstrated the best overall AP score. Its AP of 0.75 was significantly better than that of the second-best model, U-Net (AP = 0.62), at the IoU = 0.5 threshold (Table 1).…”
Section: Results (mentioning)
Confidence: 99%
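
The AP comparison above rests on counting a predicted mask as correct only when its overlap with the ground truth clears a fixed IoU threshold. A minimal sketch of that matching criterion, assuming `pred` and `gt` are binary NumPy masks of the same shape (the function names are illustrative, not taken from the cited papers):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, gt).sum() / union

def true_positive_at(pred: np.ndarray, gt: np.ndarray,
                     thresholds=(0.3, 0.5, 0.7)) -> dict:
    """Whether the prediction counts as a true positive at each IoU threshold."""
    score = iou(pred, gt)
    return {t: score >= t for t in thresholds}
```
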
“…To validate the superiority of the proposed QAU-Net, six state-of-the-art networks used for liver and liver-tumor segmentation are considered as comparative approaches. These networks can be grouped into two categories: the 2D networks include U-Net [3], U-Net++ [10], CE-Net [8], and MA-Net [14], while the 3D networks include 3D U-Net [15] and V-Net [16]. Table 2 and Table 3 present the segmentation performance on the test set using U-Net [3], CE-Net [8], MA-Net [14], 3D U-Net [15], V-Net [16], and the proposed QAU-Net.…”
Section: Experimental Comparison on Test Datasets (mentioning)
Confidence: 99%
“…It is clear that our QAU-Net achieves a mean DICE of 96.13% and 85.90%, a mean VOE of 8.52% and 24.13%, a mean RVD of 1.85% and 0.82%, a mean ASSD of 2.03 mm and 18.73 mm, and a mean RMSD of 52.60 mm and 63.12 mm for liver and liver-tumor segmentation, respectively.…”
Section: Experimental Comparison on Test Datasets (mentioning)
Confidence: 99%
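
The volumetric metrics quoted above have standard definitions on binary segmentation volumes. A minimal NumPy sketch of DICE, VOE, and RVD, assuming `pred` and `gt` are binary masks of equal shape (ASSD and RMSD additionally require surface-distance computation and are omitted; note that the RVD sign convention varies between papers):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def voe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Volumetric Overlap Error: 1 - |A∩B| / |A∪B| (i.e. 1 - Jaccard)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return 1.0 - np.logical_and(pred, gt).sum() / union if union else 0.0

def rvd(pred: np.ndarray, gt: np.ndarray) -> float:
    """Relative Volume Difference: (|A| - |B|) / |B|, prediction vs. ground truth."""
    gt_vol = gt.astype(bool).sum()
    return (pred.astype(bool).sum() - gt_vol) / gt_vol
```
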
“…For the quantitative study, we used ACC, SEN, and SPE for pixel-level performance and DSC and IoU for object-level performance, as shown in Table 3. We compared AWEU-Net against six different lung-nodule segmentation models on both datasets: PSPNet [40], MANet [41], PAN [42], FPN [43], DeepLabV3 [44], and U-Net [21,22]. As shown in Table 3, integrating both PAWE and CAWE with the U-Net outperformed the segmentation results of the baseline model (U-Net).…”
Section: Nodule Segmentation (mentioning)
Confidence: 99%
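
Several of the comparisons above instantiate MA-Net through the segmentation_models_pytorch package (the Yakubovskiy implementation cited in the first excerpt), which exposes an MA-Net decoder alongside U-Net, FPN, and others. A minimal sketch; the encoder, channel count, and input size are illustrative choices, not settings from any of the cited papers:

```python
import torch
import segmentation_models_pytorch as smp

# MA-Net decoder on an ImageNet-pretrained ResNet-34 encoder.
# Single-channel input (e.g. a CT slice) and one foreground class
# are illustrative, not taken from the cited experiments.
model = smp.MAnet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=1,
    classes=1,
)

x = torch.randn(2, 1, 256, 256)  # batch of 2 single-channel 256x256 slices
with torch.no_grad():
    logits = model(x)            # shape: (2, 1, 256, 256), raw logits
print(logits.shape)
```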