In X-ray imaging, it is common practice to normalize the acquired projection data with averaged flat fields taken prior to the scan. Unfortunately, due to source instabilities, vibrating beamline components such as the monochromator, time-varying detector properties, or other confounding factors, flat fields are often far from stationary, resulting in significant systematic errors in intensity normalization. In this work, a simple and efficient method is proposed to account for dynamically varying flat fields. Through principal component analysis of a set of flat fields, eigen flat fields are computed. A linear combination of the most important eigen flat fields is then used to individually normalize each X-ray projection. Experiments show that the proposed dynamic flat field correction leads to a substantial reduction of systematic errors in projection intensity normalization compared to conventional flat field correction.
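Since the abstract outlines the main computational steps (PCA of a set of flat fields, then a per-projection linear combination of eigen flat fields), a minimal NumPy sketch of that pipeline is given below. The plain least-squares weight fit is an assumption made purely for illustration; the abstract does not state how the per-projection weights are estimated.

```python
import numpy as np

def dynamic_flat_field_correction(projections, flats, dark, n_components=5):
    """Normalize each projection with its own flat field, built as the mean
    flat field plus a weighted sum of the leading eigen flat fields.

    projections : (n_proj, H, W) raw projections
    flats       : (n_flat, H, W) flat fields acquired without the sample
    dark        : (H, W) dark field
    """
    flats = flats - dark                              # dark-field corrected flats
    mean_flat = flats.mean(axis=0)
    deviations = (flats - mean_flat).reshape(len(flats), -1)

    # Principal component analysis of the flat-field variations:
    # the rows of Vt are the (flattened) eigen flat fields.
    _, _, Vt = np.linalg.svd(deviations, full_matrices=False)
    eigen_flats = Vt[:n_components]                   # (n_components, H*W)

    corrected = np.empty(projections.shape, dtype=float)
    for i, proj in enumerate(projections):
        p = (proj - dark).ravel()
        # Hypothetical weight estimation: least-squares fit of the eigen
        # flat field weights to this projection's deviation from the mean
        # flat field. The abstract does not specify this step, so it is
        # only illustrative.
        w, *_ = np.linalg.lstsq(eigen_flats.T, p - mean_flat.ravel(), rcond=None)
        flat_i = mean_flat.ravel() + eigen_flats.T @ w
        corrected[i] = (p / np.clip(flat_i, 1e-6, None)).reshape(dark.shape)
    return corrected
```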
The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. Firstly, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic regions (the fluid flow). Secondly, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography dataset show that, in comparison to state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without image quality loss. Therefore, temporal resolution can be substantially increased and thus fluid flow experiments with faster dynamics can be performed.
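The temporal prior of the second assumption can be illustrated in isolation: the sketch below fits a two-level piecewise constant model (an air level, a fluid level, and a single arrival time) to one voxel's time-attenuation curve. How this model is coupled to the iterative reconstruction itself is not described in the abstract, so only the temporal model is shown.

```python
import numpy as np

def fit_step(curve):
    """Fit a single-step (two-level piecewise constant) model to a voxel's
    time-attenuation curve: mu(t) = a for t < t0, mu(t) = b for t >= t0.

    Returns (t0, a, b, sse); brute force over all possible breakpoints.
    """
    T = len(curve)
    best = (0, curve.mean(), curve.mean(), np.sum((curve - curve.mean()) ** 2))
    for t0 in range(1, T):
        a = curve[:t0].mean()
        b = curve[t0:].mean()
        sse = np.sum((curve[:t0] - a) ** 2) + np.sum((curve[t0:] - b) ** 2)
        if sse < best[3]:
            best = (t0, a, b, sse)
    return best

# Example: noisy attenuation curve of a voxel that fills with fluid at t = 12.
rng = np.random.default_rng(0)
t = np.arange(30)
curve = np.where(t < 12, 0.05, 0.40) + 0.02 * rng.standard_normal(30)
t0, a, b, _ = fit_step(curve)
print(f"estimated arrival time {t0}, air level {a:.3f}, fluid level {b:.3f}")
```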
4D computed tomography (4D-CT) aims to visualise the temporal dynamics of a 3D sample with a sufficiently high temporal and spatial resolution. Successive time frames are typically obtained by sequential scanning, followed by independent reconstruction of each 3D dataset. Such an approach requires a large number of projections for each scan to obtain images with sufficient quality (in terms of artefacts and SNR). Hence, there is a clear trade-off between the rotation speed of the gantry (i.e. time resolution) and the quality of the reconstructed images. In this paper, the Motion Vector-based Iterative Technique (MoVIT) is introduced, which reconstructs a particular time frame by including the projections of neighbouring time frames as well. It is shown that such a strategy improves the trade-off between the rotation speed and the SNR. The framework is tested on both numerical simulations and 4D X-ray CT datasets of polyurethane foam under compression. Results show that reconstructions obtained with MoVIT have a significantly higher SNR than conventional 4D reconstructions.
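The abstract does not detail MoVIT's update rule or its motion-vector compensation, so the sketch below only illustrates the core idea of letting the projections of neighbouring time frames contribute, with reduced weight, to the reconstruction of the central frame. A SIRT-style solver is used and a random matrix stands in for the CT projector; both are assumptions for illustration.

```python
import numpy as np

def weighted_sirt(systems, n_voxels, n_iter=100):
    """SIRT-style reconstruction of one time frame from its own projections
    plus down-weighted projections of the neighbouring time frames.

    systems : list of (A, p, w) triples, where A is the projection matrix of
              a time frame, p its measured projection data, and w its weight
              (e.g. w = 1 for the central frame, w < 1 for its neighbours).
    """
    x = np.zeros(n_voxels)
    total_w = sum(w for _, _, w in systems)
    # Standard SIRT preconditioners: inverse row and column sums of A.
    precond = [(1.0 / np.maximum(A.sum(axis=1), 1e-12),
                1.0 / np.maximum(A.sum(axis=0), 1e-12)) for A, _, _ in systems]
    for _ in range(n_iter):
        update = np.zeros(n_voxels)
        for (A, p, w), (R, C) in zip(systems, precond):
            update += w * (C * (A.T @ (R * (p - A @ x))))
        x = x + update / total_w
    return x

# Toy illustration: three time frames, each with its own (random) projector
# standing in for a different set of projection angles, all measuring the
# same object (i.e. motion between frames is ignored here).
rng = np.random.default_rng(1)
x_true = rng.random(40)
frames = []
for w in (0.5, 1.0, 0.5):                 # neighbour, central, neighbour
    A = rng.random((60, 40))
    frames.append((A, A @ x_true + 0.01 * rng.standard_normal(60), w))
x_rec = weighted_sirt(frames, n_voxels=40)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```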
In computed tomography (CT), motion and deformation during the acquisition lead to streak artefacts and blurring in the reconstructed images. To remedy these artefacts, we introduce an efficient algorithm to estimate and correct for global affine deformations directly on the cone beam projections. The proposed technique is data-driven and thus removes the need for markers and/or a tracking system. A relationship between affine transformations and the cone beam transform is proved and used to correct the projections. The deformation parameters that describe deformation perpendicular to the projection direction are estimated for each projection by minimizing a plane-based inconsistency criterion. The criterion compares each projection of the main scan with all projections of a fast reference scan, which is acquired before or after the main scan. Experiments with simulated and experimental data show that the proposed affine deformation estimation method is able to substantially reduce motion artefacts in cone beam CT images.
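The plane-based inconsistency criterion compares each projection of the main scan with all projections of the reference scan; the simplified sketch below instead registers a single projection to a single reference projection with a 2D affine transform, merely to illustrate the per-projection estimation of deformation parameters perpendicular to the projection direction. It is not the paper's criterion.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def estimate_affine(projection, reference):
    """Estimate a 2D affine transform (in the detector plane) that maps
    `projection` onto `reference` by minimizing the sum of squared
    differences. Parameters: [a11, a12, a21, a22, tx, ty]."""
    def warp(params):
        a11, a12, a21, a22, tx, ty = params
        matrix = np.array([[a11, a12], [a21, a22]])
        return affine_transform(projection, matrix, offset=(tx, ty), order=1)

    def cost(params):
        return np.sum((warp(params) - reference) ** 2)

    x0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # start from the identity
    res = minimize(cost, x0, method="Powell")
    return res.x, warp(res.x)

# Toy example: the "deformed" projection is a shifted copy of the reference.
reference = np.zeros((64, 64))
reference[20:40, 25:45] = 1.0
projection = np.roll(reference, shift=(3, -2), axis=(0, 1))
params, corrected = estimate_affine(projection, reference)
print("estimated affine parameters:", np.round(params, 3))
```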