Purpose
The review of a radiation therapy plan by a physicist prior to treatment is a standard tool for ensuring treatment quality. However, little is known about how well this task is performed in practice. The goal of this study is to present a novel method for measuring the effectiveness of physics plan review: introducing simulated errors into computerized “mock” treatment charts and measuring how reliably physicists detect them.
Methods
We generated six simulated treatment charts, each containing multiple errors. To select the errors, we compiled a candidate list based on events from a departmental incident learning system and an international incident learning system (SAFRON). The 17 errors with the highest scores for frequency and severity were distributed among the six mock treatment charts. Eight physicists reviewed the simulated charts as they would a normal pretreatment plan review, with each chart reviewed by at least six physicists, yielding 113 data points for evaluation. Observer bias was minimized by a simple-error vs hidden-error approach, with detectability scores used for stratification. The confidence interval for the proportion of errors detected was computed using the Wilson score interval.
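As a sketch of the interval computation described above, the Wilson score interval for a binomial proportion can be computed directly from the detection count and the number of reviews. The count of 76 detections out of 113 reviews below is an illustrative assumption chosen to reproduce the reported 67% overall detection rate; it is not stated in the text.

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion.

    k: number of successes (errors detected)
    n: number of trials (reviews)
    z: normal quantile; 1.96 gives a 95% CI
    """
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Illustrative: 76/113 ≈ 67% detected
lo, hi = wilson_interval(76, 113)
print(f"{lo:.0%}-{hi:.0%}")  # → 58%-75%
```

Unlike the simpler normal-approximation (Wald) interval, the Wilson interval remains sensible near 0% or 100%, which matters here because some error scenarios had detection rates of 0%.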
Results
Simulated errors were detected in 67% of reviews [58–75%] (95% confidence intervals [CIs] given in brackets). Among the errors included in the simulated plans, the following scenarios had the highest detection rates: an incorrect isocenter on the DRR (93% [70–99%]), a planned dose differing from the prescribed dose (92% [67–99%]), and invalid QA (85% [58–96%]). Errors with low detection rates included an incorrect CT dataset (0% [0–39%]) and incorrect isocenter localization in the planning system (38% [18–64%]). Detection rates of errors from the simulated charts were compared against observed detection rates of errors from a departmental incident learning system.
Conclusions
It has been notoriously difficult to quantify error and safety performance in oncology. This study uses a novel technique of simulated errors to quantify performance and suggests that pretreatment physics plan review identifies some errors with high reliability while other errors are much more challenging to detect. These data will guide future work on standardization and automation. The example process studied here was physics plan review, but this approach of simulated errors may be applied in other contexts as well and may also be useful for training and education.