2018
DOI: 10.1101/392969
Preprint
AnatomyNet: Deep 3D Squeeze-and-excitation U-Nets for fast and fully automated whole-volume anatomical segmentation

Abstract: Purpose: Radiation therapy (RT) is a common treatment for head and neck (HaN) cancer, where therapists are often required to manually delineate the boundaries of the organs-at-risk (OARs). Radiation therapy planning is time-consuming, as each computed tomography (CT) volumetric data set typically consists of hundreds to thousands of slices, each of which must be individually inspected. Automated head and neck anatomical segmentation provides a way to speed up and improve the reproducibility of radiation therapy planni…

Cited by 44 publications (37 citation statements)
References 44 publications
“…Recent advances in semantic segmentation (Long et al 2015). (Ibragimov & Xing 2017) presented the first attempt at using the deep learning concept of CNNs to segment organs-at-risk in head and neck CT scans. AnatomyNet (Zhu, Huang, Tang, Qian, Du, Fan & Xie 2018) is built upon the popular 3D U-Net architecture, using residual blocks in the encoding layers and a new loss function combining the Dice score and focal loss during training. A fully convolutional neural network (FCNN) method was presented by (Tong, Gou, Yang, Ruan & Sheng 2018).…”
Section: Introduction
confidence: 99%
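The statement above describes AnatomyNet's training objective as a combination of the Dice score and focal loss. A minimal sketch of such a combined loss on a flattened binary mask is given below; the equal weighting of the two terms and the `gamma` value are illustrative assumptions, not AnatomyNet's exact settings.

```python
import numpy as np

def dice_focal_loss(probs, targets, gamma=2.0, eps=1e-7):
    """Combined soft-Dice + focal loss (illustrative sketch).

    probs: predicted foreground probabilities in [0, 1].
    targets: 0/1 ground-truth labels of the same shape.
    """
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1.0 - eps)
    targets = np.asarray(targets, dtype=float)

    # Soft Dice term: 1 - Dice coefficient of probabilities vs. labels.
    intersection = np.sum(probs * targets)
    dice = (2.0 * intersection + eps) / (np.sum(probs) + np.sum(targets) + eps)
    dice_loss = 1.0 - dice

    # Focal term: cross-entropy down-weighted for well-classified voxels.
    pt = np.where(targets == 1.0, probs, 1.0 - probs)
    focal_loss = np.mean(-((1.0 - pt) ** gamma) * np.log(pt))

    return dice_loss + focal_loss
```

A confident, correct prediction yields a loss near zero, while a confidently wrong prediction is penalized by both terms; the focal exponent `gamma` suppresses the contribution of easy voxels, which matters for small organs in large CT volumes.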
“…Our backbone S-Net achieves state-of-the-art performance. It reaches performance comparable in Dice score to Zhu et al [11], but with only 15% of the training data, which shows that S-Net has stronger feature-representation capability. Moreover, S-Net has a much better result in terms of 95HD, because outliers are alleviated by enlarging the receptive field.…”
Section: Experiments On MICCAI'15 Dataset
confidence: 65%
“…We compared against the highest score from the top four teams in the MICCAI 2015 challenge [6]. For the result of Zhu et al [11], it should be noted that they used the 38 samples provided by the MICCAI 2015 Challenge combined with an additional 216 samples for training.…”
Section: Experiments On MICCAI'15 Dataset
confidence: 99%
“…With this number of classes, the obtained IOU value went from 40-45% to 75-76% and then plateaued. One problem with this approach is data balance [31], i.e., the existence, in the samples, of more annotations from the rock class than from the microfossils.…”
Section: Results
confidence: 99%
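The class-imbalance problem described above is easy to see when IoU is computed per class: a model can score well on the dominant class while entirely missing the minority one. A minimal per-class IoU sketch (the function name and label conventions are assumptions for illustration):

```python
import numpy as np

def iou_per_class(pred, gt, num_classes):
    """Per-class intersection-over-union for integer label maps."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        # NaN when the class appears in neither prediction nor ground truth.
        ious.append(inter / union if union > 0 else float("nan"))
    return ious
```

With a ground truth dominated by one class (e.g. rock as class 0, microfossil as class 1), a prediction that labels everything as the majority class still attains a high class-0 IoU while class-1 IoU collapses to zero, which is why mean IoU across classes, or rebalanced sampling, is typically used.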