Purpose: Robust optimization is becoming the gold standard for generating plans that are robust against various kinds of treatment uncertainty. Today, most robust optimization strategies use a pragmatic set of treatment scenarios (the so-called uncertainty set) consisting of combinations of the maximum errors of each considered uncertainty source (such as tumor motion, setup, and image-conversion errors). This approach presents two key issues. First, a subset of the considered scenarios is unnecessarily improbable, which can compromise plan quality. Second, the resulting large uncertainty set leads to long plan computation times, which limits the potential of robust optimization as a standard clinical tool. To address these issues, a method is introduced that preselects a limited set of relevant treatment error scenarios.

Methods: Uncertainties due to systematic setup errors, image-conversion errors, and respiratory tumor motion are considered. A four-dimensional (4D) equiprobability hypersurface is defined that takes into account the joint probabilities of the above-mentioned uncertainty sources. Only scenarios that lie on this predefined 4D hypersurface are considered, guaranteeing statistical consistency of the uncertainty set. Twelve scenarios are selected that cover the maximum spatial displacements of the tumor during breathing. Subsequently, additional scenarios are sampled from the same 4D hypersurface to cover any estimated residual range errors. Two scenario-selection procedures were tested: (a) the maximum displacements (MD) method, which considers only the twelve scaled maximum-displacement scenarios, and (b) the maximum displacements and residual range (MDR) method, which additionally considers maximum range-uncertainty scenarios. The methods were tested on five lung cancer patients by performing comprehensive Monte Carlo robustness evaluations.

Results: A plan computation time gain of 78% is achieved with the MD method, while maintaining a target robustness of D95 greater than 95% of the prescribed dose in the worst-case scenario. Additionally, the MD method can be fully automated, which makes it a promising candidate for fast automatic planning workflows. The MDR method produced plans with excellent target robustness (D99 greater than 95% of the prescribed dose, even in the worst-case scenario), while still achieving a substantial plan computation time gain of 57%.

Conclusions: Two scenario-selection procedures were developed that achieve a significant reduction of plan computation time and memory consumption without compromising plan quality or robustness.
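A minimal sketch of scenario selection on an equiprobability hypersurface, assuming independent Gaussian setup and range errors with illustrative standard deviations and a 90% confidence level; the twelve breathing-displacement directions and the exact construction used in the paper will differ, so all names and values below are assumptions:

import numpy as np
from scipy.stats import chi2

# Assumed standard deviations: setup x/y/z in mm, range/image-conversion error in %.
sigma = np.array([2.0, 2.0, 2.0, 1.5])
# Assumed confidence level: scenarios lie on the 4D surface of constant Mahalanobis distance.
radius = np.sqrt(chi2.ppf(0.90, df=4))

def project_to_hypersurface(direction):
    """Scale an error direction so the scenario lies exactly on the 4D hypersurface."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d / sigma)   # unit length in the Mahalanobis metric
    return radius * d

# MD-like set: scaled maximum-displacement scenarios (signed spatial axes shown
# here for brevity instead of the paper's twelve breathing-displacement directions).
md_scenarios = [project_to_hypersurface(s * np.eye(4)[i])
                for i in range(3) for s in (+1.0, -1.0)]

# MDR-like extension: additionally cover the maximum residual range errors.
mdr_scenarios = md_scenarios + [project_to_hypersurface(s * np.eye(4)[3])
                                for s in (+1.0, -1.0)]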
Robust optimization is a computationally expensive process that results in long plan computation times. This issue is especially critical for moving targets, as these cases need a large number of uncertainty scenarios to robustly optimize their treatment plans. In this study, we propose a novel worst-case robust optimization algorithm, called dynamic minimax, that accelerates the conventional minimax optimization. Dynamic minimax optimization aims to speed up the plan optimization process by decreasing the number of scenarios evaluated during the optimization.

Methods: For a given pool of scenarios (e.g., 63 = 7 setup × 3 range × 3 breathing phases), the proposed dynamic minimax algorithm considers only a reduced number of candidate-worst scenarios, selected from the full 63-scenario set. These scenarios are updated throughout the optimization by randomly sampling new scenarios according to a hidden variable P, called the "probability acceptance function," which associates with each scenario the probability of it being selected as the worst case. By doing so, the algorithm favors scenarios that are mostly "active," that is, frequently evaluated as the worst case. Additionally, unconsidered scenarios can be reconsidered later in the optimization, depending on the convergence toward a particular solution. The proposed algorithm was implemented in the open-source robust optimizer MIROpt and tested on six four-dimensional (4D) IMPT lung tumor patients with various tumor sizes and motions. Treatment plans were evaluated by performing comprehensive robustness tests (simulating range errors, systematic setup errors, and breathing motion) using the open-source Monte Carlo dose engine MCsquare.

Results: The dynamic minimax algorithm achieved an optimization time gain of 84%, on average. Dynamic minimax optimization results in a significantly noisier optimization process because more scenarios are visited during the optimization. However, the increased noise level does not harm the final quality of the plan. In fact, plan quality is similar between dynamic and conventional minimax optimization with regard to target coverage and normal tissue sparing: on average, the difference in worst-case D95 is 0.2 Gy, and the differences in mean lung dose and mean heart dose are 0.4 and 0.1 Gy, respectively (evaluated in the nominal scenario).

Conclusions: The proposed worst-case 4D robust optimization algorithm achieves a significant optimization time gain of 84%, without compromising target coverage or normal tissue sparing.
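A minimal sketch of the candidate-worst sampling idea described above, assuming a generic differentiable per-scenario objective and a simple exponential update of the probability acceptance function; the function names, update rule, and step size are illustrative assumptions and not the MIROpt implementation:

import numpy as np

rng = np.random.default_rng(0)

def dynamic_minimax(f, grad_f, x0, scenarios, n_candidates=5, n_iters=200,
                    step=0.1, decay=0.9):
    """Worst-case optimization over a reduced, adaptively sampled scenario subset."""
    x = np.array(x0, dtype=float)
    n = len(scenarios)
    p = np.full(n, 1.0 / n)                  # "probability acceptance function"
    for _ in range(n_iters):
        # Sample a small candidate set according to p: scenarios that were often
        # the worst case are favored, but every scenario stays reachable.
        idx = rng.choice(n, size=min(n_candidates, n), replace=False, p=p)
        vals = [f(x, scenarios[i]) for i in idx]
        worst = idx[int(np.argmax(vals))]    # currently "active" scenario
        x -= step * grad_f(x, scenarios[worst])
        # Increase the acceptance probability of the active scenario.
        p *= decay
        p[worst] += 1.0 - decay
        p /= p.sum()
    return x

# Illustrative usage with a toy quadratic per-scenario objective:
# f = lambda x, s: float(np.sum((x - s) ** 2)); grad_f = lambda x, s: 2.0 * (x - s)
# x_opt = dynamic_minimax(f, grad_f, np.zeros(3), list(rng.normal(size=(63, 3))))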
The "clinical target distribution" (CTD) has recently been introduced as a promising alternative to the binary clinical target volume (CTV). However, a comprehensive study that considers the CTD, together with geometric treatment uncertainties, was lacking. Because the CTD is inherently a probabilistic concept, this study proposes a fully probabilistic approach that integrates the CTD directly in a robust treatment planning framework. First, the CTD is derived from a reported microscopic tumor infiltration model such that it explicitly features the probability of tumor cell presence in its target definition. Second, two probabilistic robust optimization methods are proposed that evaluate CTD coverage under uncertainty.The first method minimizes the expected-value (EV) over the uncertainty scenarios and the second method minimizes the sum of the expected value and standard deviation (EV-SD), thereby penalizing the spread of the objectives from the mean. Both EV and EV-SD methods introduce the CTD in the objective function by using weighting factors that represent the probability of tumor presence. The probabilistic methods are compared to a conventional worst-case approach that uses the CTV in a worst-case optimization algorithm. To evaluate the treatment plans, a scenario-based evaluation strategy is implemented that combines the effects of microscopic tumor infiltrations with the other geometric uncertainties. The methods are tested for five lung tumor patients, treated with intensity-modulated proton therapy. The results indicate that for the studied patient cases, the probabilistic methods favour the reduction of the esophagus dose but compensate by increasing the high-dose region in a low conflicting organ such as the lung. These results show that a fully probabilistic approach has the potential to obtain clinical benefits when tumor infiltration uncertainties are taken into account directly in the treatment planning process.
Objective: The overarching objective is to make the definition of the clinical target volume (CTV) in radiation oncology less subjective and more scientifically based. The specific objective of this study is to investigate similarities and differences between two methods that model tumor spread beyond the visible gross tumor volume (GTV): (1) the shortest path model, which is the standard method of adding a geometric GTV-CTV margin, and (2) the reaction-diffusion model.

Approach: These two models to capture the invisible tumor "fire front" are defined and compared in mathematical terms. The models are applied to geometric example cases that represent tumor spread in non-uniform and anisotropic media with anatomical barriers.

Main Results: The two seemingly disparate models bring forth traveling waves that can be associated with the front of tumor growth outward from the GTV. The shape of the fronts is similar for both models. Differences are seen in cases where the diffusive flow is reduced due to anatomical barriers, and in complex spatially non-uniform cases. The diffusion model generally leads to smoother fronts. The smoothness can be controlled with a parameter defined by the ratio of the diffusion coefficient and the proliferation rate.

Significance: Defining the CTV has been described as the weakest link of the radiotherapy chain. There are many similarities in the mathematical description and the behavior of the common geometric GTV-CTV expansion method and the definition of the CTV tumor front via the reaction-diffusion model. Its mechanistic basis and the controllable smoothness make the diffusion model an attractive alternative to the standard GTV-CTV margin model.
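For reference, a common form of the reaction-diffusion (Fisher-KPP type) model discussed here, written as a LaTeX sketch with assumed notation (c the normalized tumor cell density, D the diffusion coefficient or tensor, \rho the proliferation rate); the paper's exact formulation may differ:

% Reaction-diffusion model of tumor cell density (assumed Fisher-KPP form)
\frac{\partial c}{\partial t} = \nabla \cdot \left( D \, \nabla c \right) + \rho \, c \, (1 - c)

In a uniform medium this equation admits traveling-wave fronts with asymptotic speed v = 2\sqrt{D\rho}, and the width of the front, i.e., the smoothness mentioned above, scales with \sqrt{D/\rho}, the ratio of the diffusion coefficient to the proliferation rate.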
Currently, adaptive strategies require time- and resource-intensive manual structure corrections. This study compares adaptation strategies that differ in how structures are handled: adaptation without manual structure correction, adaptation with physician-drawn structures, and no adaptation. The strategies were compared for 16 patients with pancreas, liver, and head and neck (HN) cancer, each with 1–5 repeated images acquired during treatment: 'reference adaptation', with structures drawn by a physician; 'single-DIR adaptation', using a single set of deformably propagated structures; 'multi-DIR adaptation', using robust planning with multiple deformed structure sets; 'conservative adaptation', using the intersection and union of all deformed structures; 'probabilistic adaptation', using the probability of a voxel belonging to the structure as an optimization weight; and 'no adaptation'. Plans were evaluated using reference structures and compared using a scoring system. The reference adaptation with physician-drawn structures performed best, and no adaptation performed worst. For pancreas and liver patients, adaptation with a single DIR improved plan quality over no adaptation. For HN patients, integrating structure uncertainties brought an additional benefit. If the resources required for manual structure correction would otherwise prevent online adaptation, manual correction could be replaced by a fast 'plausibility check', and plans could be adapted with correction-free adaptation strategies. Including structure uncertainties in the optimization has the potential to make online adaptation more automatable.
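A minimal sketch of the 'probabilistic adaptation' weighting described above, assuming the per-voxel structure probability is estimated as the fraction of deformed structure sets containing the voxel and is then used to weight a simple underdose penalty; array shapes, names, and the penalty form are illustrative assumptions:

import numpy as np

def structure_probability(deformed_masks):
    """deformed_masks: (n_structure_sets, n_voxels) binary arrays -> per-voxel membership probability."""
    return np.mean(np.asarray(deformed_masks, dtype=float), axis=0)

def probability_weighted_objective(dose, voxel_probs, d_presc):
    """Underdose penalty weighted by the probability that each voxel belongs to the target."""
    underdose = np.clip(d_presc - dose, 0.0, None) ** 2
    return float(np.sum(voxel_probs * underdose))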