2022
DOI: 10.1186/s13014-022-01993-9
Clinical evaluation of two AI models for automated breast cancer plan generation

Abstract: Background: Artificial intelligence (AI) shows great potential to streamline the treatment planning process. However, its clinical adoption is slow due to the limited number of clinical evaluation studies and because the translation of the predicted dose distribution into a deliverable plan is often lacking. This study evaluates two different, deliverable AI plans in terms of their clinical acceptability, based on quantitative parameters and qualitative evaluation by four radiation oncologists. …

Cited by 11 publications (8 citation statements)
References 26 publications
“…As our results show similar values for both DL and KBP solutions, we wish to highlight the recommendation that departments should weigh differences as being clinically relevant or not. Concerning potential limitations, we acknowledge that the use of the anisotropic analytical algorithm in our study versus the use of collapsed cone convolution in the works of Kneepkens et al [26] may cause differences in the results. However, our comparison replicates a daily situation with commercially available software in use.…”
Section: Discussion (mentioning)
confidence: 92%
“…In a future study, the dosimetric impact of this auto-segmentation model could be evaluated. In order to perform such an analysis without any interobserver bias, ideally this should be done by the use of automatically generated treatment plans [21], [22]. Moreover, these plans should be based on predefined clinical goals which are widely accepted, to make such a comparison more generally useful [23].…”
Section: Discussion (mentioning)
confidence: 99%
“…Few studies used randomised trial designs, opting instead for designs such as weaker historical case controls. Accuracy was most commonly measured, although a few studies examined safety and clinician time (e.g., [44,46,53]). Effects on care-delivery were assessed using a variety of measures including time to treatment (e.g., [29,39,67,68]).…”
Section: Discussion (mentioning)
confidence: 99%
“…For prostate radiotherapy, Cha et al [45] demonstrated clinical utility of AI for MR-based planning with 65% of cases requiring no more than minor edits, and a time saving of 12 min (30% of total contouring time) for physicians. Kneepkens et al [46] found that although automatically generated plans resulted in slightly higher doses, they were clinically acceptable (AI: 90-95% vs. manual: 90%) and time-efficient.…”
Section: Radiotherapy (mentioning)
confidence: 99%