Current computational methods for light field photography model the ray-tracing geometry inside the plenoptic camera. This representation of the problem, and some common approximations, can lead to errors in the estimation of object sizes and positions. We propose a representation that leads to the correct reconstruction of object sizes and distances to the camera, by showing that light field images can be interpreted as limited angle cone-beam tomography acquisitions. We then quantitatively analyze its impact on image refocusing, depth estimation and volumetric reconstructions, comparing it against other possible representations. Finally, we validate these results with numerical and real-world examples.
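The refocusing analyzed in this abstract can be illustrated with the classic shift-and-sum algorithm over sub-aperture images. This is a minimal sketch, not the tomographic representation the paper proposes; the 4D array layout `L[u, v, s, t]`, the refocus parameter `alpha`, and integer-only shifts are simplifying assumptions.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing of a 4D light field L[u, v, s, t].

    alpha rescales the virtual focal plane; alpha = 1 reproduces the
    nominal sensor plane. Each sub-aperture image is shifted in
    proportion to its (u, v) offset from the aperture centre and the
    shifted images are averaged (integer shifts only, for simplicity).
    """
    U, V, S, T = lightfield.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - uc) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - vc) * (1.0 - 1.0 / alpha)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 1` the shifts vanish and the result is simply the mean of all sub-aperture images, i.e. the image focused at the nominal plane.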
Unlike previous works, this open data collection consists of X-ray cone-beam (CB) computed tomography (CT) datasets specifically designed for machine learning applications and high cone-angle artefact reduction. Forty-two walnuts were scanned with a laboratory X-ray set-up to provide not only data from a single object but from a class of objects with natural variability. For each walnut, CB projections on three different source orbits were acquired, providing CB data with different cone angles and allowing artefact-free, high-quality ground-truth images to be computed from the combined data for supervised learning. We provide the complete image reconstruction pipeline: raw projection data, a description of the scanning geometry, pre-processing and reconstruction scripts using open software, and the reconstructed volumes. As a result, the dataset can be used not only for high cone-angle artefact reduction but also for algorithm development and evaluation in other tasks, such as image reconstruction from limited-angle or sparse-angle (low-dose) scanning, super-resolution, or segmentation.
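The pre-processing stage of such a pipeline typically amounts to dark-field subtraction, flat-field normalisation, and the Beer-Lambert log transform. A minimal sketch, assuming per-pixel `dark` and `flat` reference frames; the function name and the `eps` guard are illustrative, not taken from the dataset's actual scripts:

```python
import numpy as np

def preprocess_projection(raw, dark, flat, eps=1e-6):
    """Standard CB projection pre-processing.

    Subtracts the dark field, normalises by the flat field, and
    applies the Beer-Lambert negative-log transform to convert
    transmission values into line integrals of attenuation.
    """
    trans = (raw - dark) / np.maximum(flat - dark, eps)
    trans = np.clip(trans, eps, None)  # guard against log(0)
    return -np.log(trans)
```

A raw frame equal to the flat field maps to zero attenuation, and halving the transmitted intensity adds ln(2) to the line integral, as expected.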
High cone-angle artifacts (HCAAs) appear frequently in circular cone-beam computed tomography (CBCT) images and can heavily affect diagnosis and treatment planning. To reduce HCAAs in CBCT scans, we propose a novel deep learning approach that efficiently reduces the three-dimensional (3D) problem of HCAAs to a set of two-dimensional (2D) problems. Specifically, we exploit the relationship between HCAAs and the rotational scanning geometry by training a convolutional neural network (CNN) on image slices that were radially sampled from CBCT scans. We evaluated this novel approach using a dataset of input CBCT scans affected by HCAAs and high-quality, artifact-free target CBCT scans. Two different CNN architectures were employed, namely U-Net and a mixed-scale dense CNN (MS-D Net). The artifact reduction performance of the proposed approach was compared to that of a Cartesian slice-based deep learning approach in which a CNN was trained to remove HCAAs from Cartesian slices. In addition, all processed CBCT scans were segmented to investigate the impact of HCAA reduction on the quality of CBCT image segmentation. We demonstrate that the proposed deep learning approach with geometry-aware dimension reduction greatly reduces HCAAs in CBCT scans and outperforms the Cartesian slice-based deep learning approach. Moreover, the proposed artifact reduction approach markedly improves the accuracy of the subsequent segmentation task compared to the Cartesian slice-based workflow.
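The radial sampling described above can be sketched as extracting planes through the rotation axis of the reconstructed volume. This is a simplified nearest-neighbour illustration, assuming a `(z, y, x)` volume with the rotation axis along `z` through the volume centre; the paper's exact sampling and interpolation scheme may differ:

```python
import numpy as np

def radial_slices(volume, n_angles):
    """Sample 2D (z, r) slices through the rotation axis of a volume.

    For each in-plane angle theta, voxels are picked by
    nearest-neighbour sampling along the radial direction, so every
    slice shares the same orientation relative to the scan geometry.
    """
    Z, Y, X = volume.shape
    cy, cx = (Y - 1) / 2.0, (X - 1) / 2.0
    n_r = min(Y, X) // 2
    r = np.arange(n_r)
    slices = []
    for k in range(n_angles):
        theta = np.pi * k / n_angles  # half a turn covers all planes
        ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, Y - 1)
        xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, X - 1)
        slices.append(volume[:, ys, xs])  # shape (Z, n_r)
    return np.stack(slices)
```

Because HCAAs vary mainly with distance from the rotation axis and height, each such slice presents the artifact in a consistent 2D layout, which is the property the geometry-aware CNN training exploits.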
The ability to predict tumor recurrence after chemoradiotherapy of locally advanced cervical cancer is a crucial clinical issue for intensifying the treatment of the highest-risk patients. The objective of this study was to investigate tumor metabolism characteristics extracted from pre- and per-treatment 18F-FDG PET images to predict 3-year overall recurrence (OR). A total of 53 locally advanced cervical cancer patients underwent pre- and per-treatment 18F-FDG PET (PET1 and PET2, respectively). Tumor metabolism was characterized through several delineations using different thresholds, based on a percentage of the maximum uptake and applied by region growing. The SUV distribution in PET1 and PET2 within each segmented region was characterized through 7 intensity and histogram-based parameters, 9 shape descriptors and 16 textural features, for a total of 1026 parameters. The predictive capability of the extracted parameters was assessed using the area under the receiver operating characteristic curve (AUC) associated with univariate logistic regression models and a random forest (RF) classifier. In univariate analyses, 36 parameters were highly significant predictors of 3-year OR (p < 0.01), with AUC ranging from 0.72 to 0.83. With RF, the out-of-bag (OOB) error rate using all extracted parameters was 26.42% (AUC = 0.72). By recursively eliminating the least important variables, the OOB error rate of the RF classifier using the nine most important parameters was 13.21% (AUC = 0.90). These results suggest that both pre- and per-treatment 18F-FDG PET exams provide meaningful information for predicting tumor recurrence. The RF classifier can handle a very large number of extracted features and combines the most prognostic parameters to improve prediction.
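The threshold-based, region-growing delineation can be sketched as a breadth-first growth from the maximum-uptake voxel. A minimal illustration assuming 6-connectivity and a single threshold given as a fraction of SUVmax; the function and parameter names are hypothetical, not taken from the study:

```python
import numpy as np
from collections import deque

def grow_region(suv, frac):
    """Delineate a lesion by region growing from the hottest voxel.

    Starting from the maximum-uptake voxel, grows a 6-connected
    region of voxels whose SUV is at least frac * SUVmax, and returns
    the resulting boolean mask.
    """
    thresh = frac * suv.max()
    seed = np.unravel_index(np.argmax(suv), suv.shape)
    mask = np.zeros(suv.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= c < s for c, s in zip(n, suv.shape))
                    and not mask[n] and suv[n] >= thresh):
                mask[n] = True
                queue.append(n)
    return mask
```

Unlike simple global thresholding, the growth step keeps only voxels connected to the hottest voxel, so nearby high-uptake structures outside the lesion are excluded from the segmented region.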
Patient-specific dosimetry in nuclear medicine relies on activity quantification in volumes of interest from scintigraphic imaging. Clinical dosimetry protocols have to be benchmarked against results computed from test phantoms, and the design of an adequate model is a crucial step in validating image-based activity quantification. We propose a computing platform that automatically generates simulated SPECT images from a dynamic phantom for arbitrary scintigraphic imaging protocols. For image generation, we first use the open-source NCAT phantom code to generate an anatomical model and 3D activity maps for the different source compartments; this information is used as input to an image simulator, and each source is modelled separately. A compartmental model is then designed to describe the interactions between the functional compartments, from which we derive time-activity curves for each compartment, with sampling times determined from real image acquisition protocols. Finally, to obtain an image at a given time after radionuclide injection, the resulting projections are aggregated by scaling each compartment's contribution according to its specific pharmacokinetics, and then corrupted by Poisson noise. Our platform combines several software packages, both in-house developments and open-source codes. In particular, an important part of our work has been to integrate the GATE simulator into our platform so as to automatically generate the command files needed to run a simulation. Furthermore, extensions were added to the GATE code to optimize the generation of projections with multiple energy windows in minimal computation time.
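The compartmental-model step can be illustrated with a toy two-compartment exchange model integrated by forward Euler. The rate constants `k12` and `k21`, the physical decay constant `lam`, and the integration scheme are illustrative assumptions, not the platform's actual model:

```python
import numpy as np

def time_activity_curves(k12, k21, lam, a0, t_samples, dt=0.01):
    """Toy two-compartment model with physical decay lambda.

    Activities A1, A2 follow
        dA1/dt = -k12*A1 + k21*A2 - lam*A1
        dA2/dt =  k12*A1 - k21*A2 - lam*A2
    integrated with forward Euler from A1(0) = a0, A2(0) = 0 and
    sampled at the (increasing) times in t_samples.
    """
    a1, a2 = a0, 0.0
    t, samples = 0.0, []
    for ts in t_samples:
        while t < ts:
            d1 = -k12 * a1 + k21 * a2 - lam * a1
            d2 = k12 * a1 - k21 * a2 - lam * a2
            a1 += d1 * dt
            a2 += d2 * dt
            t += dt
        samples.append((a1, a2))
    return np.array(samples)
```

In the full platform, curves like these scale each compartment's projection contribution before the aggregated image is corrupted by Poisson noise (e.g. `np.random.poisson` applied to the expected counts).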