2021
DOI: 10.1002/ima.22639
Nonlocal convolutional block attention module VNet for gliomas automatic segmentation

Abstract: Glioma is the most common primary tumor in the skull, but it has no obvious boundary with normal brain tissue and is difficult to remove completely. Manual segmentation of lesion regions in magnetic resonance (MR) images of gliomas is currently widespread in clinical practice, but the process is time‐consuming and poorly repeatable. These shortcomings of traditional segmentation methods motivate the search for other efficient technical means, whic…

Cited by 23 publications (15 citation statements)
References 47 publications
“…Finally, we evaluate the AEMA‐Net model on the BraTS 2020 validation dataset, and then compare it with eight representative brain tumor segmentation models including Variational‐Autoencoder Regularized 3D MultiResUNet (Tang et al), ME‐Net (Zhang et al), NLCA‐VNet (Fang et al), TransBTS (Wang et al), etc. 37,63‐69 Among these models, Guan et al, 67 Fang et al, 68 Huang et al 69 and Wang et al 65 introduce SE, non‐local, CBAM and self‐attention modules to construct their deep segmentation networks, while the remaining four works are non‐attention models. As shown in Table 5, AEMA‐Net achieves the optimal DSC values of 0.896 and 0.839 on whole tumor and core tumor segmentation, and it ranks third on enhancing tumor segmentation.…”
Section: Results
confidence: 99%
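The SE (squeeze-and-excitation) channel attention mentioned in the statement above can be illustrated with a minimal NumPy sketch. This is not any cited author's implementation; the function name, weight shapes, and reduction ratio are illustrative assumptions. The idea is to pool each channel of a 3-D feature volume to a single scalar, pass the resulting vector through a small bottleneck, and use a sigmoid gate to rescale the channels:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_channel_attention(feat, w1, w2):
    """SE-style channel attention on a 3-D feature volume (sketch).

    feat: (C, D, H, W) feature volume.
    w1: (C//r, C) and w2: (C, C//r) are hypothetical bottleneck weights
    (reduction ratio r); a real network would learn them.
    """
    squeeze = feat.mean(axis=(1, 2, 3))                 # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))  # FC -> ReLU -> FC -> sigmoid, (C,)
    return feat * excite[:, None, None, None]           # rescale each channel in place

# Toy usage: 4 channels, reduction ratio 2.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 2, 2, 2))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
out = se_channel_attention(feat, w1, w2)
```

Because the sigmoid gate only rescales channels, the output shape always matches the input; CBAM extends the same recipe with a spatial attention branch.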
“…achieves the optimal DSC accuracy on enhancing tumor segmentation. Meanwhile, the results of AEMA-Net are slightly lower than those of Isensee et al 55 and Brügger et al. 38 Additionally, as shown in Tables 1-5, all deep neural models achieve their optimal DSC values on whole tumor segmentation while achieving the lowest results on enhancing tumor segmentation. Because the whole tumor region is larger than the core and enhancing tumor regions, it is easier for U-Net models to segment whole tumors.…”
Section: Comparison With Counterparts and State-of-the-Art Methods
confidence: 96%
“…, and thus the L_RFL in Equation (7) can be viewed as a positive one, L_pRFL. The whole L_RFL loss can be calculated as:…”
Section: Negative Region Term of RFL
confidence: 99%
“…Feng et al 5 and Isensee et al 6 optimize the UNET model through different hyper-parameter selection strategies, which can reduce the random errors caused by manual setting of hyper-parameters. Many researchers 7,8 improve VNET 9 to segment brain gliomas, using residual blocks at each stage and replacing pooling with convolutions to improve segmentation results. However, most current works focus on improving the network structure, and few focus on making the segmentation network focalize hard voxels to improve performance.…”
Section: Introduction
confidence: 99%
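The two VNet ingredients named in the statement above — residual blocks at each stage, and strided convolutions in place of pooling — can be sketched in a few lines of NumPy. This is a structural sketch under stated assumptions, not the cited networks: `pointwise_conv` stands in for the full 3-D convolutions a real VNet uses, and all weight shapes are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pointwise_conv(x, w):
    # 1x1x1 convolution: a per-voxel linear mix of channels, standing in
    # for the larger 3-D kernels of a real VNet stage.
    # x: (C_in, D, H, W), w: (C_out, C_in) -> (C_out, D, H, W)
    return np.einsum('oc,cdhw->odhw', w, x)

def residual_stage(x, w1, w2):
    # VNet-style stage: conv -> ReLU -> conv, plus an identity skip.
    # The skip connection eases gradient flow through deep 3-D networks.
    h = relu(pointwise_conv(x, w1))
    h = pointwise_conv(h, w2)
    return relu(h + x)

def strided_downsample(x, w, stride=2):
    # Learned downsampling: a strided convolution replaces max-pooling,
    # halving each spatial dimension with trainable weights.
    return pointwise_conv(x[:, ::stride, ::stride, ::stride], w)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8, 8))           # 4 channels, 8^3 volume
w1 = rng.standard_normal((4, 4))
w2 = rng.standard_normal((4, 4))
y = residual_stage(x, w1, w2)                   # shape preserved: (4, 8, 8, 8)
z = strided_downsample(y, rng.standard_normal((8, 4)))  # (8, 4, 4, 4)
```

The residual stage preserves shape so the skip can be a plain addition, while the strided convolution both halves the resolution and widens the channels, which is the usual pattern along a VNet encoder.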
“…Several research works have been performed efficiently on the segmentation in B-mode echocardiography in the past few decades [2]-[4]. With the combination of various feature enhancement modules [5], [6] and different deep network architectures [7]-[10], the ground truth is applied as a class associate or shape regulation by minimizing the loss function. However, these methods still have scope for improvement.…”
Section: Introduction
confidence: 99%