Purpose: To commission an open-source Monte Carlo (MC) dose engine, "MCsquare," for a synchrotron-based proton machine, integrate it into our in-house C++-based I/O user interface and our web-based software platform, expand its functionality, and improve calculation efficiency for intensity-modulated proton therapy (IMPT).
Methods: We commissioned MCsquare using a double-Gaussian beam model based on in-air lateral profiles, integrated depth doses of 97 beam energies, and measurements of various spread-out Bragg peaks (SOBPs). We then integrated MCsquare into our C++-based dose calculation code and our web-based second-check platform, "DOSeCHECK." We validated the commissioned MCsquare on 12 different patient geometries, comparing its dose calculations with those of a well-benchmarked GPU-accelerated MC (gMC) dose engine. We further improved MCsquare's efficiency with a computed tomography (CT) resampling approach and expanded its functionality by adding a linear energy transfer (LET)-related, model-dependent biological dose calculation.
Results: Differences between MCsquare calculations and SOBP measurements in water were <2.5% (<1.5% for ~85% of measurements). Dose distributions calculated with MCsquare agreed well with gMC results in patient geometries: the average 3D gamma analysis (2%/2 mm) passing rate across the 12 patient geometries was 98.0 ± 1.0%. With the variable-resolution technique, the computation time for one IMPT plan in a patient geometry on an inexpensive CPU workstation (Intel Xeon E5-2680, 2.50 GHz) was 2.3 ± 1.8 min; all calculations except one craniospinal patient finished within 3.5 min.
Conclusions: MCsquare was successfully commissioned for a synchrotron-based proton beam therapy delivery system and integrated into our web-based second-check platform.
After adopting CT resampling and implementing LET model-dependent biological dose calculation capabilities, MCsquare is sufficiently efficient and powerful to support Monte Carlo-based, LET-guided robust optimization in IMPT, which will be pursued in future studies.
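The CT-resampling speed-up mentioned above can be illustrated with a toy block-averaging downsampler. This is only a sketch: the function name and the averaging scheme are my assumptions, not MCsquare's actual variable-resolution implementation.

```python
import numpy as np

def downsample_ct(hu, factor=2):
    """Block-average a 3D HU grid by an integer factor along each axis.

    A coarser grid means fewer voxels for the MC transport to traverse,
    which is the basic idea behind CT resampling for faster dose calculation.
    Edge voxels that do not fill a whole block are trimmed for simplicity.
    """
    hu = np.asarray(hu, dtype=float)
    nz, ny, nx = (s - s % factor for s in hu.shape)
    blocks = hu[:nz, :ny, :nx].reshape(
        nz // factor, factor, ny // factor, factor, nx // factor, factor)
    # Average over the three per-block axes to get one value per block.
    return blocks.mean(axis=(1, 3, 5))
```

Halving the resolution along each axis reduces the voxel count by a factor of eight, which is where most of the runtime gain would come from.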
Purpose: One of the main sources of uncertainty in proton therapy is the conversion of the Hounsfield units (HUs) of the planning CT to (relative) proton stopping powers. Proton radiography provides range error maps, but these can be affected by sources of error other than the CT conversion (e.g., residual misalignment). To better understand and quantify range uncertainty, it is desirable to measure the individual contributions, particularly those associated with the CT conversion.
Methods: A workflow is proposed to assess the CT conversion solely on the basis of proton radiographs of real tissues measured with a multilayer ionization chamber (MLIC). The workflow consists of four stages: (a) CT and proton radiography acquisition, (b) CT and proton radiography registration in postprocessing, (c) sample-specific validation of the semi-empirical model used both in the registration and to estimate the water-equivalent path length (WEPL), and (d) WEPL error estimation. The workflow was applied to a pig head as part of the validation of the CT calibration of the proton therapy center PARTICLE at UZ Leuven, Belgium.
Results: The CT conversion-related uncertainty computed with the well-established safety-margin rule of 1.2 mm + 2.4% was overestimated by 71% on the pig head. However, the range uncertainty was substantially underestimated where cavities were encountered by the protons; excluding areas with cavities, the overestimation of the uncertainty was 500%. A correlation was found between these localized errors and HUs between −1000 and −950, suggesting that the underestimation was not a consequence of an inaccurate conversion but was probably due to the resolution of the CT leading to material mixing at interfaces. To reduce these errors, the CT calibration curve was adapted by extending the HU interval corresponding to air up to −950.
Conclusion: The application of the workflow as part of the validation of the CT conversion to RSPs showed an overall overestimation of the expected uncertainty. Moreover, the largest WEPL errors were found to be related to the presence of cavities, which are nevertheless associated with low WEPL values. This suggests that applying this workflow to patients, or in a generalized study on different types of animal tissues, could shed light on how the contributions to the CT conversion-related uncertainty add up, potentially reducing the uncertainty margins used in treatment planning by up to several millimeters. All the algorithms required to perform the workflow were implemented in the computational tool openPR, which is part of openREGGUI, an open-source image-processing platform for adaptive proton therapy.
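The 1.2 mm + 2.4% safety-margin rule referenced in the Results is simple enough to state as code. A minimal sketch, with an illustrative function name (the rule itself is as cited in the abstract):

```python
def range_margin_mm(range_mm, fixed_mm=1.2, relative=0.024):
    """Classic proton range safety margin: a fixed 1.2 mm term plus 2.4%
    of the (water-equivalent) range, both expressed in millimeters."""
    return fixed_mm + relative * range_mm
```

For example, a 150 mm water-equivalent range yields a margin of 1.2 + 0.024 × 150 = 4.8 mm, which is the scale of margin the workflow above suggests could be overestimated for cavity-free tissue.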
For radiation therapy, it is crucial to ensure that the delivered dose matches the planned dose. Errors in the dose calculations performed by the treatment planning system (TPS), treatment delivery errors, other software bugs, or data corruption during transfer can lead to significant differences between predicted and delivered doses. Patient-specific quality assurance (QA) of dose distributions, through experimental validation of individual fields, is therefore necessary. These measurement-based approaches, however, are performed with 2D detectors of limited resolution in a water phantom. Moreover, they are labor intensive and often impose a bottleneck on treatment efficiency. In this work, we investigated the potential to replace the measurement-based approach with simulation-based patient-specific QA using a Monte Carlo (MC) code as an independent dose calculation engine in combination with treatment log files. The developed QA platform is composed of a web interface, servers, and computation scripts, and is capable of autonomously launching simulations and identifying and reporting dosimetric inconsistencies. To validate the beam model of the independent MC engine, in-water simulations of mono-energetic layers and 30 SOBP-type dose distributions were performed; an average gamma passing rate of 99 ± 0.5% was observed for the 2%/2 mm criterion. To demonstrate the feasibility of the proposed approach, 10 clinical cases (head and neck, intracranial, and craniospinal axis indications) were retrospectively evaluated with the QA platform, and the results were compared to those obtained with the measurement-based approach. The comparison demonstrated consistency between the methods, while the proposed approach significantly reduced the in-room time required for QA procedures.
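The 2%/2 mm gamma comparison used above can be sketched in one dimension. This is a simplified, globally normalized version for illustration only; clinical tools search in 3D with interpolation, and the function below is an assumption, not the platform's implementation:

```python
import numpy as np

def gamma_pass_rate_1d(dose_ref, dose_eval, spacing_mm,
                       dose_crit=0.02, dist_crit_mm=2.0):
    """Brute-force 1D gamma analysis with global dose normalization.

    For each reference point, gamma is the minimum over all evaluated points
    of sqrt((distance / 2 mm)^2 + (dose difference / 2% of max)^2); a point
    passes when gamma <= 1. Returns the passing rate in percent.
    """
    dose_ref = np.asarray(dose_ref, dtype=float)
    dose_eval = np.asarray(dose_eval, dtype=float)
    x = np.arange(dose_ref.size) * spacing_mm
    d_norm = dose_crit * dose_ref.max()  # global 2% criterion
    gammas = np.empty(dose_ref.size)
    for i in range(dose_ref.size):
        dist2 = ((x - x[i]) / dist_crit_mm) ** 2
        dd2 = ((dose_eval - dose_ref[i]) / d_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dd2))
    return float(np.mean(gammas <= 1.0) * 100.0)
```

Identical profiles trivially pass at 100%; the interesting cases are small shifts or dose scalings, where the distance and dose terms trade off against each other.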
Robust optimization is a computationally expensive process resulting in long plan computation times. This issue is especially critical for moving targets, as these cases need a large number of uncertainty scenarios to robustly optimize their treatment plans. In this study, we propose a novel worst-case robust optimization algorithm, called dynamic minimax, that accelerates conventional minimax optimization by decreasing the number of scenarios evaluated during optimization. Methods: For a given pool of scenarios (e.g., 63 = 7 setup × 3 range × 3 breathing phases), the proposed dynamic minimax algorithm considers only a reduced number of candidate worst-case scenarios, selected from the full 63-scenario set. These candidates are updated throughout the optimization by randomly sampling new scenarios according to a hidden variable P, called the "probability acceptance function," which associates with each scenario the probability of it being selected as the worst case. By doing so, the algorithm favors scenarios that are mostly "active," that is, frequently evaluated as the worst case. Additionally, unconsidered scenarios can be reconsidered later in the optimization, depending on the convergence toward a particular solution. The proposed algorithm was implemented in the open-source robust optimizer MIROpt and tested on six four-dimensional (4D) IMPT lung tumor patients with various tumor sizes and motions. Treatment plans were evaluated by performing comprehensive robustness tests (simulating range errors, systematic setup errors, and breathing motion) using the open-source Monte Carlo dose engine MCsquare. Results: The dynamic minimax algorithm achieved an optimization time gain of 84%, on average. Dynamic minimax optimization results in a significantly noisier optimization process because more scenarios are accessed during the optimization.
However, the increased noise level does not harm the final quality of the plan. In fact, the plan quality is similar between dynamic and conventional minimax optimization with regard to target coverage and normal tissue sparing: on average, the difference in worst-case D95 is 0.2 Gy and the difference in mean lung dose and mean heart dose is 0.4 and 0.1 Gy, respectively (evaluated in the nominal scenario). Conclusions: The proposed worst-case 4D robust optimization algorithm achieves a significant optimization time gain of 84%, without compromising target coverage or normal tissue sparing.
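The candidate-scenario sampling driven by the probability acceptance function P can be sketched as follows. This is a toy version: the function names, the reinforcement constant, and the update rule are assumptions, not MIROpt's actual scheme.

```python
import random

def draw_candidate_scenarios(p_accept, k, seed=0):
    """Repeatedly propose a random scenario index and keep it with
    probability p_accept[index], until k distinct candidate worst-case
    scenarios are collected (k much smaller than the full pool, e.g., 63)."""
    rng = random.Random(seed)
    chosen = set()
    while len(chosen) < k:
        s = rng.randrange(len(p_accept))
        if rng.random() <= p_accept[s]:
            chosen.add(s)
    return sorted(chosen)

def reinforce(p_accept, worst_index, boost=0.2):
    """Toy reinforcement step: raise the acceptance probability of the
    scenario that came out worst in the current iteration, so frequently
    'active' scenarios are more likely to be resampled."""
    p_accept[worst_index] = min(1.0, p_accept[worst_index] + boost)
```

Because unchosen scenarios retain nonzero probability, they can re-enter the candidate set in later iterations, which matches the reconsideration behavior described above.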