A new approximate method to calculate exchange-correlation contributions in the framework of first-principles tight-binding molecular dynamics methods has been developed. In the proposed scheme, on-site (off-site) exchange-correlation matrix elements are expressed as a one-center (two-center) term plus a correction due to the rest of the atoms. The one-center (two-center) term is evaluated directly, while the correction is calculated using a variation of the Sankey-Niklewski [1] approach generalized to arbitrary atomic-like basis sets. The proposed scheme for the exchange-correlation part permits the accurate and computationally efficient calculation of the corresponding tight-binding matrices and atomic forces for complex systems. We calculate bulk properties of selected transition (W, Pd), noble (Au), and simple (Al) metals, a semiconductor (Si), and the transition-metal oxide TiO2 with the new method to demonstrate its flexibility and good accuracy.
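A minimal schematic of the decomposition described above, with assumed notation chosen here for illustration (not taken from the paper): for atomic-like orbitals φ centered on atoms i and j, with ρ the total density and ρ_i the density associated with atom i, the on-site and off-site exchange-correlation matrix elements could be written as

```latex
% Schematic sketch of the on-site / off-site decomposition described in the abstract.
% All symbols (rho, rho_i, V_xc, delta V) are assumed notation for illustration.
\begin{align}
  % On-site element: one-center term from the density of atom i,
  % plus a correction due to the rest of the atoms.
  \langle \phi_{i\mu} \,|\, V_{\mathrm{xc}}[\rho] \,|\, \phi_{i\nu} \rangle
    &\simeq \langle \phi_{i\mu} \,|\, V_{\mathrm{xc}}[\rho_i] \,|\, \phi_{i\nu} \rangle
    + \delta V^{\mathrm{xc}}_{i\mu,\,i\nu}\!\left[\rho - \rho_i\right], \\
  % Off-site element: two-center term from the densities of atoms i and j,
  % plus a correction due to the rest of the atoms.
  \langle \phi_{i\mu} \,|\, V_{\mathrm{xc}}[\rho] \,|\, \phi_{j\nu} \rangle
    &\simeq \langle \phi_{i\mu} \,|\, V_{\mathrm{xc}}[\rho_i + \rho_j] \,|\, \phi_{j\nu} \rangle
    + \delta V^{\mathrm{xc}}_{i\mu,\,j\nu}\!\left[\rho - \rho_i - \rho_j\right].
\end{align}
```

The correction terms δV are, per the abstract, the pieces evaluated with the generalized Sankey-Niklewski-type expansion rather than directly.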
Purpose
To improve image quality and computed tomography (CT) number accuracy of daily cone beam CT (CBCT) through a deep learning methodology with generative adversarial network.
Methods
One hundred and fifty paired pelvic CT and CBCT scans were used for model training and validation. An unsupervised deep learning method, a 2.5D pixel-to-pixel generative adversarial network (GAN) model with feature mapping, was proposed. A total of 12 000 CT–CBCT slice pairs were used for model training, and ten-fold cross validation was applied to verify model robustness. Paired CT–CBCT scans from an additional 15 pelvic patients and 10 head-and-neck (HN) patients, whose CBCT images were acquired on a different machine, were used for independent testing. Besides the proposed method, other network configurations were also tested: 2D vs 2.5D; the GAN model with vs without feature mapping; the GAN model with vs without an additional perceptual loss; and previously reported models, namely U-net and cycleGAN with or without identity loss. Image quality of the deep-learning generated synthetic CT (sCT) images was quantitatively compared against the reference CT (rCT) images using the mean absolute error (MAE) in Hounsfield units (HU) and the peak signal-to-noise ratio (PSNR). The dosimetric calculation accuracy was further evaluated with both photon and proton beams.
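As a minimal, illustrative sketch of the two image-quality metrics named above (MAE in HU and PSNR), assuming sCT and rCT are NumPy arrays of HU values; the function names, the example volumes, and the data range used for PSNR are assumptions for this sketch, not taken from the paper:

```python
import numpy as np

def mae_hu(sct, rct, mask=None):
    """Mean absolute error in Hounsfield units between synthetic CT and reference CT."""
    diff = np.abs(sct.astype(np.float64) - rct.astype(np.float64))
    return float(diff[mask].mean() if mask is not None else diff.mean())

def psnr_hu(sct, rct, data_range=4000.0):
    """Peak signal-to-noise ratio in dB; data_range (assumed HU span) sets the peak value."""
    mse = np.mean((sct.astype(np.float64) - rct.astype(np.float64)) ** 2)
    return float(20.0 * np.log10(data_range) - 10.0 * np.log10(mse))

if __name__ == "__main__":
    # Stand-in volumes shaped (slices, H, W) just to exercise the metrics.
    rng = np.random.default_rng(0)
    rct = rng.normal(0.0, 200.0, size=(80, 256, 256))   # mock reference CT
    sct = rct + rng.normal(0.0, 20.0, size=rct.shape)   # mock synthetic CT
    print(f"MAE  = {mae_hu(sct, rct):.1f} HU")
    print(f"PSNR = {psnr_hu(sct, rct):.1f} dB")
```

The optional mask argument is included because such metrics are often restricted to a body contour or region of interest; whether the paper did so is not stated in the abstract.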
Results
The deep-learning generated sCTs showed improved image quality, with reduced artifact distortion and improved soft-tissue contrast. The proposed 2.5D pix2pix GAN with feature matching (FM) was the best model among all tested methods, producing the highest PSNR and the lowest MAE relative to rCT. The dose distributions demonstrated high accuracy for photon-based planning, whereas more work is needed for proton-based treatment. Once the model was trained, it took 11–12 ms to process one slice and could generate a 3D volume of dCBCT (80 slices) in less than a second using an NVIDIA GeForce GTX Titan X GPU (12 GB, Maxwell architecture).
Conclusion
The proposed deep learning algorithm shows promise for improving CBCT image quality in an efficient way and thus has the potential to support online CBCT-based adaptive radiotherapy.