The reduction of metal artifacts in computed tomography (CT) images, specifically for strong artifacts generated from multiple metal objects, is a challenging issue in medical imaging research. Although there have been some studies on supervised metal artifact reduction through the learning of synthesized artifacts, it is difficult for simulated artifacts to cover the complexity of the real physical phenomena that may be observed in X-ray propagation. In this paper, we introduce metal artifact reduction methods based on an unsupervised volume-to-volume translation learned from clinical CT images. We construct three-dimensional adversarial nets with a regularized loss function designed for metal artifacts from multiple dental fillings. The results of experiments using a CT volume database of 361 patients demonstrate that the proposed framework has an outstanding capacity to reduce strong artifacts and to recover underlying missing voxels, while preserving the anatomical features of soft tissues and tooth structures from the original images.
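The abstract does not specify the network or the regularized loss, but the following PyTorch sketch illustrates the general shape of a 3D volume-to-volume adversarial setup with cycle-consistency and an identity-style regularizer. The layer sizes, the least-squares adversarial objective, and the weights `lam_cyc` and `lam_id` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a 3D volume-to-volume adversarial setup in PyTorch.
# All architectural details and loss weights below are assumptions; the
# paper's exact regularized loss for dental-filling artifacts is not given.
import torch
import torch.nn as nn

class Gen3D(nn.Module):
    """Toy 3D encoder-decoder mapping an artifact sub-volume to an artifact-free one."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.InstanceNorm3d(ch), nn.ReLU(True),
            nn.Conv3d(ch, ch * 2, 4, stride=2, padding=1), nn.InstanceNorm3d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose3d(ch * 2, ch, 4, stride=2, padding=1), nn.InstanceNorm3d(ch), nn.ReLU(True),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Disc3D(nn.Module):
    """Toy 3D PatchGAN-style discriminator."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv3d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv3d(ch * 2, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def cycle_gan_losses(G_ab, G_ba, D_b, real_a, real_b, lam_cyc=10.0, lam_id=5.0):
    """One direction of an (assumed) LSGAN + cycle-consistency + identity objective."""
    fake_b = G_ab(real_a)
    rec_a = G_ba(fake_b)
    adv = ((D_b(fake_b) - 1.0) ** 2).mean()              # least-squares adversarial term
    cyc = lam_cyc * (rec_a - real_a).abs().mean()        # cycle-consistency (L1)
    idt = lam_id * (G_ab(real_b) - real_b).abs().mean()  # identity-style regularizer
    return adv + cyc + idt

if __name__ == "__main__":
    a = torch.randn(1, 1, 32, 64, 64)   # CT sub-volume with artifacts (toy size)
    b = torch.randn(1, 1, 32, 64, 64)   # artifact-free sub-volume
    loss = cycle_gan_losses(Gen3D(), Gen3D(), Disc3D(), a, b)
    loss.backward()
    print(float(loss))
```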
We concluded that our techniques enable mouse-based, direct drilling of complex 3D regions with high-quality rendering of the drilled boundaries, and that they contribute to preoperative planning of Microendoscopic Discectomy.
Background: We investigated the geometric and dosimetric impact of three-dimensional (3D) generative adversarial network (GAN)-based metal artifact reduction (MAR) algorithms on volumetric-modulated arc therapy (VMAT) and intensity-modulated proton therapy (IMPT) for the head and neck region, based on artifact-free computed tomography (CT) volumes with dental fillings. Methods: Thirteen metal-free CT volumes of the head and neck region were obtained from The Cancer Imaging Archive. To simulate metal artifacts on the CT volumes, we defined 3D regions of the teeth as pseudo-dental fillings in the metal-free CT volumes, and a value of 4000 HU was assigned to the selected teeth regions of interest. Two different CT volumes, one with four (m4) and the other with eight (m8) pseudo-dental fillings, were generated for each case and used as the Reference. CT volumes with metal artifacts were then generated from the Reference CT volumes (Artifacts). On the Artifacts CT volumes, metal artifacts were manually corrected using the water-density override method with a value of 1.0 g/cm3 (Water). In addition, CT volumes with metal artifacts reduced by a 3D GAN extension of CycleGAN were generated (GAN-MAR). The structural similarity (SSIM) index within the planning target volume was calculated as a quantitative error metric between the Reference CT volumes and the other volumes. After creating VMAT and IMPT plans on the Reference CT volumes, the reference plans were recalculated on the remaining CT volumes. Results: The time required to generate a single GAN-MAR CT volume was approximately 30 s. The median SSIMs were lower in the m8 group than in the m4 group, and ANOVA showed a significant difference in SSIM for the m8 group (p < 0.05). Although the median differences in D98%, D50%, and D2% were larger in the m8 group than in the m4 group, the deviations from the reference plans were within 3% for VMAT and 1% for IMPT. Conclusions: The GAN-MAR CT volumes, generated in a short time, were closer to the Reference CT volumes than the Water and Artifacts CT volumes were, and the observed dosimetric differences from the reference plans were clinically acceptable.
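As a rough illustration of two of the steps above, the following NumPy/scikit-image sketch overrides a teeth ROI with 4000 HU to create pseudo-fillings and scores a processed volume against the Reference using SSIM restricted to the PTV. Averaging the local SSIM map over the PTV mask, and the helper names `add_pseudo_fillings` and `ssim_in_ptv`, are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch (assumed workflow): insert pseudo-dental fillings at 4000 HU and
# compute SSIM within the PTV against the Reference volume.
import numpy as np
from skimage.metrics import structural_similarity

def add_pseudo_fillings(ct_hu: np.ndarray, teeth_roi: np.ndarray) -> np.ndarray:
    """Return a copy of the CT volume with the selected teeth voxels set to 4000 HU."""
    out = ct_hu.copy()
    out[teeth_roi] = 4000.0
    return out

def ssim_in_ptv(reference: np.ndarray, test: np.ndarray, ptv_mask: np.ndarray) -> float:
    """Mean of the local SSIM map over the PTV voxels (a masking approximation)."""
    data_range = float(max(reference.max(), test.max()) - min(reference.min(), test.min()))
    _, ssim_map = structural_similarity(reference, test, data_range=data_range, full=True)
    return float(ssim_map[ptv_mask].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(0.0, 300.0, size=(32, 64, 64))              # toy HU volume
    teeth = np.zeros_like(ref, dtype=bool); teeth[10:14, 20:28, 20:28] = True
    ptv = np.zeros_like(ref, dtype=bool); ptv[8:24, 16:48, 16:48] = True
    artifacts = add_pseudo_fillings(ref, teeth)
    print(ssim_in_ptv(ref, artifacts, ptv))
```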
In endoscopic surgery, it is necessary to understand the three-dimensional structure of the target region to improve safety. For organs that do not deform much during surgery, preoperative computed tomography (CT) images can be used to understand their three-dimensional structure; however, deformation estimation is necessary for organs that deform substantially. Although intraoperative deformation estimation of organs has been widely studied, it requires two-dimensional organ-region segmentation of the camera images. In this paper, we propose a region segmentation method using U-net for the lung, an organ that deforms substantially during surgery. Because segmentation accuracy for smoker lungs is lower than that for non-smoker lungs, we improved it by translating the texture of the lung surface using a CycleGAN.
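A minimal sketch of the two-stage inference pipeline implied above, assuming a trained CycleGAN generator is applied to smoker cases before a U-Net produces the lung mask. Both networks below are stand-in toy modules, and `segment_lung`, its threshold, and the `smoker` flag are hypothetical names used only for illustration.

```python
# Sketch of the assumed pipeline: CycleGAN texture translation for smoker
# lungs, followed by U-Net segmentation of the lung region in a camera frame.
import torch
import torch.nn as nn

toy_translator = nn.Sequential(          # placeholder for a trained CycleGAN generator
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(True), nn.Conv2d(16, 3, 3, padding=1), nn.Tanh()
)
toy_segmenter = nn.Sequential(           # placeholder for a trained U-Net
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(True), nn.Conv2d(16, 1, 3, padding=1)
)

@torch.no_grad()
def segment_lung(frame: torch.Tensor, smoker: bool, threshold: float = 0.5) -> torch.Tensor:
    """Return a binary lung mask for one camera frame of shape (1, 3, H, W)."""
    if smoker:
        frame = toy_translator(frame)            # map smoker texture toward non-smoker style
    logits = toy_segmenter(frame)
    return torch.sigmoid(logits) > threshold     # (1, 1, H, W) boolean mask

if __name__ == "__main__":
    frame = torch.rand(1, 3, 128, 128)
    mask = segment_lung(frame, smoker=True)
    print(mask.float().mean().item())
```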
The aim of this study was to evaluate the generalization of auto-segmentation accuracy for limited-FOV CBCT in the male pelvic region using a full-image CNN. Auto-segmentation accuracy was evaluated using various datasets with different intensity distributions and FOV sizes. Methods: A total of 171 CBCT datasets from patients with prostate cancer were enrolled; 151, 10, and 10 datasets were acquired from Vero4DRT, TrueBeam STx, and Clinac-iX, respectively, with FOVs of 20, 26, and 25 cm. The ROIs, including the bladder, prostate, rectum, and seminal vesicles, were manually delineated. The U²-Net CNN architecture was used to train the segmentation model. A total of 131 limited-FOV CBCT datasets from Vero4DRT were used for training (104 datasets) and validation (27 datasets), and the remaining datasets were used for testing. The training routine was set to save the best weights when the DSC on the validation set was maximized. Segmentation accuracy was qualitatively and quantitatively evaluated between the ground-truth and predicted ROIs in the different testing datasets. Results: The mean ± standard deviation visual evaluation scores for the bladder, prostate, rectum, and seminal vesicles across all treatment machines were 1.0 ± 0.7, 1.5 ± 0.6, 1.4 ± 0.6, and 2.1 ± 0.8 points, respectively. The median DSC values for all imaging devices were ≥0.94 for the bladder, 0.84-0.87 for the prostate and rectum, and 0.48-0.69 for the seminal vesicles. Although the DSC values for the bladder and seminal vesicles differed significantly among the three imaging devices, the DSC value for the bladder changed by less than 1 percentage point. The median MSD values for all imaging devices were ≤1.2 mm for the bladder and 1.4-2.2 mm for the prostate, rectum, and seminal vesicles. The MSD values for the seminal vesicles differed significantly among the three imaging devices. Conclusion: The proposed method is effective for testing datasets whose intensity distributions and FOVs differ from those of the training datasets.
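For reference, a minimal NumPy/SciPy sketch of the two reported metrics, assuming DSC denotes the Dice similarity coefficient and MSD the symmetric mean surface distance; the isotropic `spacing_mm` handling and surface extraction by binary erosion are simplifications, not the study's exact evaluation code.

```python
# Sketch of the assumed evaluation metrics: Dice similarity coefficient (DSC)
# and symmetric mean surface distance (MSD) between two boolean masks.
import numpy as np
from scipy import ndimage

def dice(gt: np.ndarray, pred: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(gt, pred).sum()
    return float(2.0 * inter / (gt.sum() + pred.sum()))

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a boolean mask."""
    return np.logical_and(mask, ~ndimage.binary_erosion(mask))

def mean_surface_distance(gt: np.ndarray, pred: np.ndarray, spacing_mm: float = 1.0) -> float:
    """Symmetric mean distance (mm) between the two mask surfaces, assuming isotropic voxels."""
    gt_s, pred_s = surface(gt), surface(pred)
    d_to_pred = ndimage.distance_transform_edt(~pred_s) * spacing_mm  # distance to pred surface
    d_to_gt = ndimage.distance_transform_edt(~gt_s) * spacing_mm      # distance to gt surface
    return float(np.concatenate([d_to_pred[gt_s], d_to_gt[pred_s]]).mean())

if __name__ == "__main__":
    gt = np.zeros((32, 64, 64), dtype=bool); gt[8:24, 16:48, 16:48] = True
    pred = np.zeros_like(gt); pred[9:25, 18:50, 14:46] = True
    print(dice(gt, pred), mean_surface_distance(gt, pred))
```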