2022
DOI: 10.1038/s41467-022-33178-z
Comprehensive and clinically accurate head and neck cancer organs-at-risk delineation on a multi-institutional study

Abstract: Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OA…


Cited by 19 publications (12 citation statements); references 42 publications.
“…27 The reported interobserver variability serves as a baseline performance that any auto-segmentation tool should aim to surpass. An efficient tool should, in practice, outperform the interobserver variability by leveraging the ability of deep learning models [18][19][20][21][22][23][24][25][26] to learn from multiple contouring styles, and generate segmentation masks that are more representative of both observers. Furthermore, the investigation of the intermodality variability provides important information regarding the visibility and distinguishability of OARs as represented by different imaging modalities.…”
Section: Implications For Auto-segmentationmentioning
confidence: 99%
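Interobserver agreement of the kind used as a baseline above is most commonly quantified with the Dice similarity coefficient between two observers' contours. The following is a minimal illustrative sketch on toy data, not code from the cited study; the `dice` helper and the voxel-index-set representation are assumptions for the example.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    each given as a set of voxel-index tuples.
    Returns 1.0 for two empty masks by convention."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Two observers' contours of the same organ as voxel-index sets (toy data)
obs1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
obs2 = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(obs1, obs2))  # 0.75
```

An auto-segmentation tool "outperforming interobserver variability" then means its Dice against each observer exceeds the Dice between the observers themselves.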
“…[6][7][8][9] To reduce observer variability for OARs in the HaN region, several contouring guidelines 4,10 have been published, and initiatives have recently been launched to quantify observer variability [10][11][12] and provide quality assurance. [13][14][15][16] On the other hand, automated contouring (i.e., automated segmentation, auto-segmentation) performed by computer-assisted algorithms 17 has witnessed a revival with the introduction and integration of artificial intelligence approaches, such as deep learning, [18][19][20][21][22][23][24][25][26] which has outperformed the previously established atlas-based auto-segmentation. 27 As a result, computational challenges were organized to evaluate the quality of auto-segmentation results, 28 and several datasets were made publicly available for benchmarking different auto-segmentation methodologies 20,[28][29][30][31] and evaluating their clinical acceptability.…”
Section: Introductionmentioning
confidence: 99%
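Besides overlap measures, the observer-variability quantification efforts mentioned above typically also report boundary-distance metrics such as the 95th-percentile Hausdorff distance (HD95). A toy brute-force sketch, assuming contours represented as point lists; the `hd95` helper is hypothetical, not from the cited works:

```python
import math

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two
    contour point sets (lists of coordinate tuples).
    Brute-force O(n*m); real tools use spatial indexing."""
    def nearest(p, pts):
        # distance from point p to its closest neighbor in pts
        return min(math.dist(p, q) for q in pts)

    # directed distances in both directions, pooled symmetrically
    dists = [nearest(p, points_b) for p in points_a] + \
            [nearest(q, points_a) for q in points_b]
    dists.sort()
    idx = min(len(dists) - 1, int(0.95 * len(dists)))
    return dists[idx]

# Identical contours give 0; a uniform 1-voxel shift gives 1.0
print(hd95([(0, 0), (1, 0)], [(0, 0), (1, 0)]))  # 0
print(hd95([(0, 0), (1, 0)], [(0, 1), (1, 1)]))  # 1.0
```

Taking the 95th percentile rather than the maximum makes the metric robust to a few outlier boundary points, which is why it is preferred for contour quality assurance.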
“…In past clinical practice, the delineation of OARs and GTVs in NPC was predominantly performed by experienced radiation oncologists. However, according to clinical treatment guidelines, more than 40 OARs and 2 GTVs need to be accurately delineated for each patient (Ye et al, 2022;Guo et al, 2020). This requires radiation oncologists to spend substantial time on delineation, increasing the annotator's burden and patient waiting time.…”
Section: Clinical Backgroundmentioning
confidence: 99%
“…Multi-organ segmentation has been extensively studied in medical imaging because of its core importance for many downstream tasks, such as quantitative disease analysis [23], computer-aided diagnosis [44], and cancer radiotherapy planning [25,59]. With the emergence of many carefully labeled organ datasets [2] and the rapid development of deep learning segmentation techniques [22], deep segmentation networks trained on specific datasets achieve performance comparable to human observers [48,51,59]. However, this setup can have serious limitations in practical deployment for clinical applications.…”
Section: Introductionmentioning
confidence: 99%
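Benchmarking a multi-organ segmentation network like the ones discussed above means scoring each organ label separately rather than the whole label map at once. A minimal sketch on flat integer label maps; the `per_organ_dice` helper and the toy data are assumptions for illustration, not part of any cited benchmark:

```python
def per_organ_dice(pred, truth, labels):
    """Per-organ Dice for two flat integer label maps (equal-length
    sequences), where each organ has its own integer label."""
    scores = {}
    for lab in labels:
        p = {i for i, v in enumerate(pred) if v == lab}
        t = {i for i, v in enumerate(truth) if v == lab}
        if not p and not t:
            scores[lab] = 1.0  # organ absent in both: perfect agreement
        else:
            scores[lab] = 2 * len(p & t) / (len(p) + len(t))
    return scores

# Toy 1-D label maps: 0 = background, 1 and 2 = two organs
pred = [0, 1, 1, 2, 2, 0]
truth = [0, 1, 2, 2, 2, 0]
print(per_organ_dice(pred, truth, labels=[1, 2]))
```

Reporting a per-organ breakdown is what exposes the problem the paper targets: small or rarely contoured OARs can score poorly even when the overall mean looks acceptable.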