2022
DOI: 10.3389/fonc.2021.816672

AttR2U-Net: A Fully Automated Model for MRI Nasopharyngeal Carcinoma Segmentation Based on Spatial Attention and Residual Recurrent Convolution

Abstract: Radiotherapy is an essential method for treating nasopharyngeal carcinoma (NPC), and the segmentation of NPC is a crucial process affecting the treatment. However, manual segmentation of NPC is inefficient, and segmentation results can vary considerably between doctors. To improve the efficiency and consistency of NPC segmentation, we propose a novel AttR2U-Net model that automatically and accurately segments nasopharyngeal carcinoma from MRI images. This model is based on the classic U-Ne…
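The truncated abstract names the model's key ingredients: a U-Net backbone, residual recurrent convolution, and spatial attention. The sketch below is a minimal, illustrative PyTorch rendering of how such a block could be assembled; the channel widths, the number of recurrent steps (t = 2), and the single-channel 7x7 spatial-attention map are assumptions for illustration, not the authors' exact AttR2U-Net design.

```python
# Hypothetical sketch of a residual recurrent convolution block with a simple
# spatial attention map, in the spirit of the AttR2U-Net description.
# All layer sizes and the attention formulation are illustrative assumptions.
import torch
import torch.nn as nn


class RecurrentConv(nn.Module):
    """Apply the same 3x3 conv t times, feeding the sum of the block input
    and the previous output back in (recurrent convolution)."""

    def __init__(self, channels: int, t: int = 2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        for _ in range(self.t - 1):
            out = self.conv(x + out)
        return out


class AttR2Block(nn.Module):
    """Residual recurrent unit followed by a single-channel spatial attention map."""

    def __init__(self, in_ch: int, out_ch: int, t: int = 2):
        super().__init__()
        self.project = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # match channels for the residual path
        self.r2 = nn.Sequential(RecurrentConv(out_ch, t), RecurrentConv(out_ch, t))
        self.spatial_att = nn.Sequential(                        # per-location attention weights in [0, 1]
            nn.Conv2d(out_ch, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.project(x)
        feat = x + self.r2(x)                    # residual recurrent convolution
        return feat * self.spatial_att(feat)     # reweight each spatial location


if __name__ == "__main__":
    block = AttR2Block(in_ch=1, out_ch=64)
    print(block(torch.randn(1, 1, 128, 128)).shape)  # -> torch.Size([1, 64, 128, 128])
```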

Cited by 13 publications (7 citation statements)
References 44 publications

“…(4) Flow and timing (sufficient time between index and reference, all data points included in analysis) [17], 3D Res-UNet [17,27,28], modified 3D Res-UNet [14,17], a mix of 2D and 3D Res-UNet [29,30], 3D VNet [15], 3D SI-UNet [18], 3D Nested UNet [14,19,31], 3D AttR2-UNet [14,21,31], 3D LW-UNet [32], and 3D DE-UNet [33]. Magnetic resonance imaging modality studies used hospital data [21-24, 26, 28, 29, 31-35], and CT studies often used the 2019 MICCAI StructSeg data [14,15,17,19] and hospital data [16,18,27,30].…”
Section: Study Characteristics and Quality Assessment
Citation type: mentioning; confidence: 99%
“…Dense U‐Net densely connects convolutional layers in blocks, 48 ResU‐Net includes residual connections, 49 Retina U‐Net is a two‐stage network, RU‐Net includes recurrent connections, and R2U‐Net adds residual recurrent connections. 50 Attention modules have also been added at the skip connections. 51,52 Both V‐Net 53 and nnUNet 54 were designed with 3D convolutional layers, with nnUNet additionally automating preprocessing and learning parameter optimization.…”
Section: Image Segmentation
Citation type: mentioning; confidence: 99%
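The excerpt above mentions attention modules added at U-Net skip connections. As a concrete illustration, here is a minimal additive attention gate in the style of Attention U-Net, assuming the decoder (gating) and encoder (skip) features have already been brought to the same spatial resolution; the channel sizes and intermediate width are illustrative assumptions, not the cited papers' exact designs.

```python
# Hypothetical sketch of an additive attention gate at a U-Net skip connection.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Weight encoder skip features by a gating signal from the decoder."""

    def __init__(self, gate_ch: int, skip_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(gate_ch, inter_ch, 1), nn.BatchNorm2d(inter_ch))
        self.w_x = nn.Sequential(nn.Conv2d(skip_ch, inter_ch, 1), nn.BatchNorm2d(inter_ch))
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, gate: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # Additive attention: alpha = sigmoid(psi(relu(W_g * g + W_x * x)))
        alpha = self.psi(self.relu(self.w_g(gate) + self.w_x(skip)))
        return skip * alpha  # suppress irrelevant skip activations


if __name__ == "__main__":
    gate = torch.randn(1, 128, 32, 32)   # decoder features (already at skip resolution)
    skip = torch.randn(1, 64, 32, 32)    # encoder features from the skip connection
    out = AttentionGate(gate_ch=128, skip_ch=64, inter_ch=32)(gate, skip)
    print(out.shape)  # -> torch.Size([1, 64, 32, 32])
```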
“…Attention modules have also been added at the skip connections. 51,52 Both V‐Net 53 and nnUNet 54 were designed with 3D convolutional layers, with nnUNet additionally automating preprocessing and learning parameter optimization. Pix2pix uses U‐Net as the generator with a convolutional discriminator (PatchGAN).…”
Section: Image Segmentation
Citation type: mentioning; confidence: 99%
“…After that, many segmentation algorithms for medical images were adapted from U-Net. Some scholars have combined mechanisms such as attention and residual connectivity with U-Net to improve segmentation performance and to segment nasopharyngeal carcinoma [40][41][42]. To accommodate volumetric segmentation of medical images, many U-Net-based 3D models have been developed as well [43,44].…”
Section: Fully-supervised
Citation type: mentioning; confidence: 99%
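The excerpt notes that U-Net has been extended with 3D models for volumetric medical images. The following is a minimal sketch of the underlying 2D-to-3D swap: the same "double conv" building block, parameterized by dimensionality. It illustrates the general idea only, not the specific architectures cited; the channel counts are illustrative assumptions.

```python
# Hypothetical sketch of a U-Net "double conv" block in 2D (slice-wise) and 3D (volumetric) form.
import torch
import torch.nn as nn


def double_conv(in_ch: int, out_ch: int, dims: int = 3) -> nn.Sequential:
    """Two conv + norm + ReLU layers; dims selects 2D (slices) or 3D (volumes)."""
    Conv = nn.Conv3d if dims == 3 else nn.Conv2d
    Norm = nn.BatchNorm3d if dims == 3 else nn.BatchNorm2d
    return nn.Sequential(
        Conv(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        Norm(out_ch),
        nn.ReLU(inplace=True),
        Conv(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
        Norm(out_ch),
        nn.ReLU(inplace=True),
    )


if __name__ == "__main__":
    slice_block = double_conv(1, 32, dims=2)
    volume_block = double_conv(1, 32, dims=3)
    print(slice_block(torch.randn(1, 1, 128, 128)).shape)       # -> (1, 32, 128, 128)
    print(volume_block(torch.randn(1, 1, 32, 128, 128)).shape)  # -> (1, 32, 32, 128, 128)
```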