With the advancement of treatment modalities in radiation therapy for cancer patients, outcomes have improved, but at the cost of increased treatment plan complexity and planning time. Accurate prediction of dose distributions would alleviate this issue by guiding clinical plan optimization to save time while maintaining high plan quality. We have modified a convolutional deep network model, U-net (originally designed for segmentation), to predict dose from patient image contours of the planning target volume (PTV) and organs at risk (OAR). As an example, we show that we can accurately predict the dose of intensity-modulated radiation therapy (IMRT) for prostate cancer patients: the average Dice similarity coefficient is 0.91 when comparing predicted vs. true isodose volumes between 0% and 100% of the prescription dose. The average absolute differences in [max, mean] dose are within approximately 5% of the prescription dose; for each structure they are [1.80%, 1.03%] (PTV), [1.94%, 4.22%] (Bladder), [1.80%, 0.48%] (Body), [3.87%, 1.79%] (L Femoral Head), [5.07%, 2.55%] (R Femoral Head), and [1.26%, 1.62%] (Rectum) of the prescription dose. We thus mapped a desired radiation dose distribution from a patient’s PTV and OAR contours. As an additional advantage, relatively little data was needed for the techniques and models described in this paper.
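The abstract evaluates predictions via the Dice similarity of predicted vs. true isodose volumes at thresholds between 0% and 100% of the prescription dose. As a rough illustration of that metric only (not the authors' code), the Python sketch below computes Dice scores over a grid of isodose levels; the array names, grid size, and the 79.2 Gy prescription value are assumptions made for the example.

```python
import numpy as np

def isodose_dice(pred_dose, true_dose, prescription, levels=np.arange(0.0, 1.01, 0.05)):
    """Dice similarity of predicted vs. true isodose volumes.

    The isodose volume at level f is the set of voxels receiving at least
    f * prescription. Returns one Dice score per threshold level.
    """
    scores = []
    for f in levels:
        pred_vol = pred_dose >= f * prescription
        true_vol = true_dose >= f * prescription
        denom = pred_vol.sum() + true_vol.sum()
        overlap = np.logical_and(pred_vol, true_vol).sum()
        scores.append(2.0 * overlap / denom if denom else 1.0)
    return np.array(scores)

# Illustrative use with synthetic 3D dose grids (Gy) and an assumed 79.2 Gy prescription
rng = np.random.default_rng(0)
true = rng.uniform(0, 80, size=(64, 64, 32))
pred = true + rng.normal(0, 2, size=true.shape)
print(isodose_dice(pred, true, prescription=79.2).mean())
```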
In this paper, we present a fully automatic, fast, and accurate free-form deformable registration technique. It minimizes an energy functional that combines similarity and smoothness measures. Using the calculus of variations, the minimization problem is reduced to a set of nonlinear elliptic partial differential equations (PDEs), which are solved iteratively with a Gauss-Seidel finite difference scheme. The registration is refined by a multi-resolution approach, and the whole process is fully automatic. It takes less than 3 min to register two three-dimensional (3D) image sets of size 256 x 256 x 61 on a single 933 MHz personal computer. Extensive experiments, including simulations, phantom studies, and clinical image studies, show that our model and algorithm are suited for registration of temporal images of a deformable body. Registration of the inspiration and expiration phases of lung images shows that the method can handle large deformations, and when applied to the daily CT images of a prostate patient, the results show that registration based on iterative refinement of the displacement field is appropriate for describing local deformations in the prostate and the rectum. Similarity measures improved significantly after registration. The target application of this work is radiotherapy treatment planning and evaluation that incorporates internal organ deformation throughout the course of radiation therapy; the registration method could equally be applied in diagnostic radiology.
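The authors solve the Euler-Lagrange PDEs of their energy functional with a Gauss-Seidel finite difference scheme and multi-resolution refinement; none of that code appears in the abstract. Purely as an illustrative sketch of the same kind of energy (an SSD similarity term plus a diffusion smoothness term on the displacement field), the toy 2D Python example below uses explicit gradient-flow iterations instead of Gauss-Seidel; all names and parameter values are hypothetical.

```python
import numpy as np
from scipy import ndimage

def register_2d(fixed, moving, alpha=1.0, step=0.1, iters=200):
    """Toy free-form registration: gradient-flow minimization of
    SSD(fixed, moving warped by u) + alpha * |grad u|^2 on a 2D image pair."""
    fixed = fixed.astype(float)
    moving = moving.astype(float)
    ux = np.zeros_like(fixed)          # displacement field, x (column) component
    uy = np.zeros_like(fixed)          # displacement field, y (row) component
    gy, gx = np.gradient(moving)       # spatial gradients of the moving image
    yy, xx = np.meshgrid(np.arange(fixed.shape[0]),
                         np.arange(fixed.shape[1]), indexing="ij")
    for _ in range(iters):
        coords = [yy + uy, xx + ux]
        warped = ndimage.map_coordinates(moving, coords, order=1, mode="nearest")
        diff = warped - fixed          # pointwise residual of the SSD data term
        wgy = ndimage.map_coordinates(gy, coords, order=1, mode="nearest")
        wgx = ndimage.map_coordinates(gx, coords, order=1, mode="nearest")
        # descend the Euler-Lagrange gradient: data force minus diffusion regularizer
        ux -= step * (diff * wgx - alpha * ndimage.laplace(ux))
        uy -= step * (diff * wgy - alpha * ndimage.laplace(uy))
    return ux, uy

# Illustrative use: try to recover a small known shift of a smoothed random image
rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((64, 64)), 3)
ux, uy = register_2d(img, np.roll(img, 2, axis=1))
```

A real implementation along the paper's lines would operate on 3D volumes, solve the elliptic system with Gauss-Seidel sweeps rather than explicit updates, and wrap the solver in a coarse-to-fine pyramid.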
Purpose
The use of neural networks to directly predict three‐dimensional dose distributions for automatic planning is becoming popular. However, the existing methods use only patient anatomy as input and assume consistent beam configuration for all patients in the training database. The purpose of this work was to develop a more general model that considers variable beam configurations in addition to patient anatomy to achieve more comprehensive automatic planning with a potentially easier clinical implementation, without the need to train specific models for different beam settings.
Methods
The proposed anatomy and beam (AB) model is based on our newly developed deep learning architecture, the hierarchically densely connected U‐Net (HD U‐Net), which combines U‐Net and DenseNet. The AB model contains 10 input channels: one for the beam setup and the other nine for anatomical information (PTV and organs). The beam setup information is represented by a 3D matrix of the non‐modulated beam’s eye view ray‐tracing dose distribution. We used a set of images from 129 patients with lung cancer treated with IMRT using heterogeneous beam configurations (4–9 beams of various orientations) for training/validation (100 patients) and testing (29 patients). Mean squared error was used as the loss function. We evaluated the model’s accuracy by comparing the mean dose, maximum dose, and other relevant dose–volume metrics of the predicted dose distribution against those of the clinically delivered dose distribution. Dice similarity coefficients were computed to assess the spatial correspondence of the isodose volumes between the predicted and clinically delivered doses. The model was also compared with our previous work, the anatomy only (AO) model, which does not consider beam setup information and uses only the nine anatomical channels.
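To make the input layout concrete, here is a minimal PyTorch sketch of how a 10-channel input (one ray-tracing beam dose channel plus nine anatomy channels) could be assembled and trained against an MSE loss. The tensor shapes, names, and the tiny convolutional stand-in for the HD U-Net are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: one training case on a 64 x 96 x 96 dose grid.
D, H, W = 64, 96, 96
beam_dose = torch.rand(1, 1, D, H, W)       # channel 0: non-modulated BEV ray-tracing dose
anatomy   = torch.rand(1, 9, D, H, W)       # channels 1-9: PTV and organ masks
target    = torch.rand(1, 1, D, H, W)       # clinically delivered dose (normalized)

x = torch.cat([beam_dose, anatomy], dim=1)  # 10-channel input of the AB model

# Small convolutional stand-in for the HD U-Net (the real model is a densely
# connected U-Net; its architecture is not reproduced here).
model = nn.Sequential(
    nn.Conv3d(10, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
)

loss = nn.MSELoss()(model(x), target)       # mean squared error, as in the paper
loss.backward()
```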
Results
The AB model outperformed the AO model, especially in the low and medium dose regions. In terms of dose–volume metrics, AB outperformed AO by about 1–2%. The largest improvement, about 5%, was found for the lung volume receiving a dose of 5 Gy or more (V5). The improvement in spinal cord maximum dose was also notable: 3.6% for cross‐validation and 2.6% for testing. The AB model achieved Dice scores for isodose volumes as much as 10% higher than the AO model in low and medium dose regions, and about 2–5% higher in high dose regions.
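For reference, the dose–volume metrics quoted above (e.g., lung V5 and spinal cord maximum dose) can be computed from a dose grid and a structure mask as in the short sketch below; the dose arrays and masks are synthetic and the names are placeholders.

```python
import numpy as np

def v_x(dose, structure_mask, threshold_gy):
    """Percent of the structure's volume receiving at least threshold_gy Gy (e.g., lung V5)."""
    voxels = dose[structure_mask]
    return 100.0 * np.mean(voxels >= threshold_gy) if voxels.size else 0.0

def d_max(dose, structure_mask):
    """Maximum dose (Gy) inside the structure (e.g., spinal cord Dmax)."""
    return float(dose[structure_mask].max())

# Illustrative comparison of predicted vs. clinical dose for one synthetic structure
rng = np.random.default_rng(1)
dose_clinical = rng.uniform(0, 66, size=(64, 64, 64))
dose_predicted = dose_clinical + rng.normal(0, 1.5, size=dose_clinical.shape)
lung = rng.random(dose_clinical.shape) > 0.7   # hypothetical lung mask
print(abs(v_x(dose_predicted, lung, 5.0) - v_x(dose_clinical, lung, 5.0)))
```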
Conclusions
The AO model, which does not use beam configuration as input, can still predict dose distributions with reasonable accuracy in high dose regions but introduces large errors in low and medium dose regions for IMRT cases with variable beam numbers and orientations. The proposed AB model outperforms the AO model substantially in low and medium dose regions, and slightly in high dose regions, by considering beam setup information through a cumulative non‐modulated beam’s eye view ray‐tracing dose distribution. This new model represents a major step forward towards predicting 3D dose distributions in real clinical practice, where beam configurations vary from patient to patient.