Background: Traditional radiotherapy positioning methods have shortcomings such as fragile skin markers, additional imaging dose, and a lack of information integration. Emerging technologies may provide alternatives for the relevant clinical practice.

Purpose: To propose a noninvasive radiotherapy positioning system that integrates augmented reality (AR) and optical surface imaging, and to evaluate its feasibility in the clinical workflow.

Methods: AR and structured-light-based surface imaging were integrated to implement coarse-to-precise positioning in two coherent steps: AR-based coarse guidance and optical-surface-based precise verification. For quality assurance, face and pattern recognition were used for patient authentication, case association, and accessory validation in the AR scenes. Holographic images reconstructed from simulation computed tomography (CT) images guided the initial posture correction by virtual-real alignment. Body-surface point clouds were fused using the calibration and pose estimation of the structured-light cameras, and were segmented according to preset regions of interest (ROIs). Global-to-local registration of the cross-source point clouds yielded couch shifts in six degrees of freedom (DoF), which were ultimately transmitted to the AR scenes. The evaluation, based on a phantom and human subjects (4 volunteers), included: (i) the quality assurance workflow, (ii) errors of both steps and correlation analysis, (iii) receiver operating characteristic (ROC) analysis, (iv) distance characteristics of accuracy, and (v) clinical positioning efficiency.

Results: The maximum errors in the phantom evaluation were 3.4 ± 2.5 mm in Vrt and 1.4 ± 1.0° in Pitch for the coarse guidance step, and 1.6 ± 0.9 mm in Vrt and 0.6 ± 0.4° in Pitch for the precise verification step. The Pearson correlation coefficients between the precise verification and cone beam CT (CBCT) results were distributed in the interval [0.81, 0.85]. In ROC analysis, the areas under the curve (AUC) were 0.87 and 0.89 for translation and rotation, respectively. In the human-body evaluation, errors for the thorax and abdomen (T&A) were significantly greater than those for the head and neck (H&N) in Vrt (2.6 ± 1.1 vs. 1.7 ± 0.8 mm, p < 0.01), Lng (2.3 ± 1.1 vs. 1.4 ± 0.9 mm, p < 0.01), and Rtn (0.8 ± 0.4 vs. 0.6 ± 0.3°, p = 0.01), while relatively similar in Lat (1.8 ± 0.9 vs. 1.7 ± 0.8 mm, p = 0.07). The translation displacement range after the coarse guidance step required for high accuracy of the optical surface component was 0–42 mm, and the average positioning duration of the integrated system was significantly shorter than that of the conventional workflow (355.7 ± 21.7 vs. 387.7 ± 26.6 s, p < 0.01).

Conclusions: The combination of AR and optical surface imaging is feasible and useful for patient positioning, in terms of both safety and accuracy.
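The precise verification step rests on rigid registration of cross-source point clouds to recover 6-DoF couch shifts. The abstract does not specify the algorithm; as an illustration only, the core of such a registration can be sketched with the Kabsch/SVD method, which assumes point correspondences are already established (in practice, an ICP-style pipeline would iterate correspondence search and this solve):

```python
import numpy as np

def rigid_transform(source, target):
    """Kabsch/SVD: best-fit rotation R and translation t mapping source -> target.
    source, target: (N, 3) arrays of corresponding 3D points."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t

# Toy example: recover a simulated translation-only couch offset (mm)
rng = np.random.default_rng(0)
surface = rng.normal(size=(100, 3))              # synthetic body-surface points
shifted = surface + np.array([5.0, -2.0, 1.0])
R, t = rigid_transform(surface, shifted)
print(np.round(t, 3))                            # recovered shift ≈ [5, -2, 1]
```

In the translation-only case the recovered rotation is the identity; with a rotated target, the Euler angles of R would correspond to the Pitch/Roll/Rtn components reported above.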
Purpose: To develop a deep learning model for 3D dose distribution prediction in volumetric modulated arc radiotherapy (VMAT) of cervical cancer, and to explore the impact of different multichannel input data on prediction accuracy, in particular to demonstrate the feasibility of dose prediction based only on computed tomography (CT) images and the delineated planning target volume (PTV) contours.

Methods: A total of 118 VMAT cases were collected and organized into three datasets with different multichannel combinations. In addition to the clinical dose distribution data occupying one channel, the three datasets were as follows: Dataset-A, 7 channels, included CT images, the PTV, and the organs at risk (OARs); Dataset-B, 2 channels, included CT images and the PTV; Dataset-C, a single channel, included only CT images. A full-scale feature fusion 3D conditional generative adversarial network (cGAN)-based dose distribution prediction architecture was proposed, with multiple loss functions used as the optimization target. Under this framework, three models were obtained by training on the three datasets: Model-A, Model-B, and Model-C. The following indicators were used to evaluate and compare model performance: (1) the 3D dose difference map and the mean absolute error (MAE); (2) the dose-volume histogram (DVH) curve; (3) the dose indices (DIs) of the PTV and OARs; (4) the Dice similarity coefficient (DSC).

Results: The proposed model accurately predicts the 3D dose distribution. For the twenty test patients, the MAE of Model-A is 1.1 ± 0.2%, while the MAEs of Model-B and Model-C are 1.4 ± 0.2% and 1.9 ± 0.3%, respectively. The DIs of the PTV (D99%, D98%, D95%, HI, and CI) and OARs for Model-A and Model-B show no significant differences from the clinical results. The average DSC of Model-A across different isodose volumes is greater than 0.94; Model-B and Model-C follow with average DSCs greater than 0.91 and 0.86, respectively.

Conclusion: We propose a new dose prediction model based on a full-scale feature fusion and generative adversarial architecture, confirming the feasibility of dose prediction based only on CT images and the PTV. The proposed approach provides a simpler and more effective method for clinical dose assessment, radiotherapy planning assistance, and automatic planning.
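The evaluation metrics used above (MAE as a percentage of the prescription dose, and DSC between predicted and clinical isodose volumes) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code; the prescription dose and grid size are assumptions:

```python
import numpy as np

def mae_percent(pred, true, prescription):
    """Mean absolute dose error over the grid, as % of the prescription dose."""
    return 100.0 * np.mean(np.abs(pred - true)) / prescription

def dice(pred, true, iso, prescription):
    """Dice similarity coefficient between predicted and clinical isodose
    volumes, thresholded at `iso` (fraction of the prescription dose)."""
    p = pred >= iso * prescription
    t = true >= iso * prescription
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy example on a synthetic 3D dose grid (assumed 50 Gy prescription)
rng = np.random.default_rng(1)
clinical = rng.uniform(0.0, 55.0, size=(32, 32, 32))
predicted = clinical + rng.normal(0.0, 0.5, size=clinical.shape)  # small error
mae = mae_percent(predicted, clinical, 50.0)   # ~1% for this noise level
dsc = dice(predicted, clinical, 0.9, 50.0)     # 90% isodose volume overlap
print(round(mae, 2), round(dsc, 3))
```

In the paper's setting these metrics would be computed per patient over the clinical dose grid, with the DSC averaged across several isodose levels.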