The ability of dual-energy computed tomography (CT) systems to determine the concentrations of constituent materials in a mixture, known as material decomposition, is the basis for many of dual-energy CT's clinical applications. However, the complex composition of tissues and organs in the human body poses a challenge for many material decomposition methods, which assume the presence of only two, or at most three, materials in the mixture. We developed a flexible, model-based method that extends dual-energy CT's core material decomposition capability to more complex situations, in which a larger number of materials must be disambiguated and their concentrations quantified. The proposed method, named multi-material decomposition (MMD), was used to develop two image analysis algorithms. The first is virtual unenhancement (VUE), which digitally removes the effect of contrast agents from contrast-enhanced dual-energy CT exams. VUE can reduce patient dose, improve clinical workflow, and support clinical applications such as CT urography and CT angiography. The second is liver-fat quantification (LFQ), which accurately quantifies the fat concentration in the liver from dual-energy CT exams and can form the basis of a clinical application targeting the diagnosis and treatment of fatty liver disease. Using image data from a cohort of 50 patients and from phantoms, the application of MMD to VUE and LFQ yielded quantitatively accurate results when compared against gold standards. Furthermore, consistent results were obtained across all imaging phases (contrast-free and contrast-enhanced). This is of particular importance, since most clinical protocols for abdominal CT imaging call for multi-phase imaging. We conclude that MMD can successfully form the basis of a number of dual-energy CT image analysis algorithms and has the potential to improve the clinical utility of dual-energy CT in disease management.
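To make the decomposition step concrete, the following sketch poses a per-voxel multi-material decomposition as a constrained non-negative least-squares problem over a small library of basis materials, which is one common way such problems are formulated. The material names, attenuation values, and the weighted sum-to-one row are illustrative assumptions and do not reproduce the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical low-/high-kVp attenuation values (HU) for a few candidate
# basis materials; real values are scanner- and protocol-dependent.
MATERIALS = {
    "fat":         (-110.0, -90.0),
    "water":       (   0.0,   0.0),
    "soft_tissue": (  60.0,  55.0),
    "iodine":      ( 400.0, 220.0),   # dilute iodinated contrast
}

def decompose_voxel(hu_low, hu_high, weight=1e3):
    """Estimate non-negative volume fractions that (approximately) sum to
    one and best explain the measured low/high-kVp attenuation of a voxel."""
    names = list(MATERIALS)
    A = np.array([[MATERIALS[n][0] for n in names],   # low-kVp model
                  [MATERIALS[n][1] for n in names],   # high-kVp model
                  [weight] * len(names)])             # heavily weighted sum-to-one row
    b = np.array([hu_low, hu_high, weight])
    fractions, _ = nnls(A, b)
    return dict(zip(names, fractions))

# Example: a voxel that looks like soft tissue with some iodine enhancement.
print(decompose_voxel(hu_low=120.0, hu_high=90.0))
```

In the same spirit, a virtual-unenhanced value could be approximated by re-synthesizing the voxel after setting its estimated iodine fraction to zero, although the paper's actual VUE algorithm may differ.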
This paper addresses the problem of motion estimation from profiles (also known as apparent contours) of an object rotating on a turntable in front of a single camera. Its main contribution is a practical and accurate technique for solving this problem from profiles alone, precise enough to allow the shape of the object to be reconstructed. No point or line correspondences are necessary, although the proposed method can equally be used, without any further adaptation, when such features are available. Symmetry properties of the surface of revolution swept out by the rotating object are exploited to obtain, in a robust and elegant way, the image of the rotation axis and the homography relating epipolar lines in two views. These, together with geometric constraints on images of rotating objects, are then used to obtain first the image of the horizon, which is the projection of the plane containing the camera centers, and then the epipoles, thus fully determining the epipolar geometry of the image sequence. Estimating the epipolar geometry by this sequential approach (image of the rotation axis, then homography, then image of the horizon, then epipoles) avoids many of the problems usually found in other algorithms for motion recovery from profiles. In particular, the search for the epipoles, by far the most critical step, is carried out as a simple one-dimensional optimization problem. The initialization of the parameters is trivial and completely automatic for all stages of the algorithm. After the estimation of the epipolar geometry, the Euclidean motion is recovered using the fixed intrinsic parameters of the camera, obtained either from a calibration grid or from self-calibration techniques. Finally, the spinning object is reconstructed from its profiles using the motion estimated in the previous stage. Results from real data are presented, demonstrating the efficiency and usefulness of the proposed methods.
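The symmetry the paper exploits can be expressed as a harmonic homology that maps the silhouette of the surface of revolution onto itself; its axis is the image of the rotation axis. The sketch below constructs such a homology and applies it to an image point. The numeric axis, vertex, and point values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def harmonic_homology(axis_line, vertex):
    """Harmonic homology with the given axis (homogeneous line) and
    vertex (homogeneous point). It maps each point to its 'mirror'
    with respect to the axis, as seen from the vertex."""
    a = np.asarray(axis_line, dtype=float).reshape(3, 1)
    v = np.asarray(vertex, dtype=float).reshape(3, 1)
    scale = float(v.ravel() @ a.ravel())
    return np.eye(3) - 2.0 * (v @ a.T) / scale

# Hypothetical values: a vertical axis image at x = 320 and a vertex at
# infinity along the x direction (fronto-parallel viewing, for simplicity).
W = harmonic_homology(axis_line=[1.0, 0.0, -320.0], vertex=[1.0, 0.0, 0.0])

x = np.array([100.0, 50.0, 1.0])      # a point on one side of the profile
x_sym = W @ x
print(x_sym / x_sym[2])               # -> (540, 50): its symmetric counterpart
```

Fitting such a transformation to the two sides of an observed profile is one way the axis image and the epipolar-line homography can be recovered before the one-dimensional search for the epipoles.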
In the context of visual surveillance of human activity, knowledge of a camera's internal and external parameters is useful, as it allows image measurements to be related to world measurements. Unfortunately, calibration information is rarely available and is difficult to obtain after a surveillance system has been installed. In this paper, a method for camera autocalibration based on information gathered by tracking people is developed. It makes two main contributions: first, we show how a foot-to-head plane homology can be used to obtain the calibration parameters and how initial parameter estimates can be computed efficiently from the measurements; second, we present a Bayesian solution to the calibration problem that elegantly handles measurement uncertainties, outliers, and prior information. We show how the full posterior distribution of the calibration parameters given the measurements can be estimated, which allows statements to be made about the accuracy of both the calibration parameters and measurements derived from them.
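As a concrete example of one ingredient of such a pipeline, the sketch below shows how the vertical vanishing point, needed to relate the foot and head positions of standing people, could be initialized by intersecting the foot-to-head lines of several tracked people in a least-squares sense. The detection coordinates and the function are hypothetical and do not reproduce the authors' estimator.

```python
import numpy as np

def vertical_vanishing_point(feet, heads):
    """Least-squares intersection of the foot-to-head line segments of
    tracked people, returned as a pixel coordinate (assumes the point
    is finite). feet, heads: (N, 2) arrays of pixel positions."""
    f = np.hstack([np.asarray(feet, float),  np.ones((len(feet), 1))])
    h = np.hstack([np.asarray(heads, float), np.ones((len(heads), 1))])
    lines = np.cross(f, h)              # homogeneous line through each pair
    _, _, vt = np.linalg.svd(lines)     # v minimizing |lines @ v| with |v| = 1
    v = vt[-1]
    return v[:2] / v[2]

# Hypothetical foot/head detections for three tracked people (pixels).
feet  = [(100, 400), (300, 420), (500, 380)]
heads = [(102, 300), (298, 310), (497, 290)]
print(vertical_vanishing_point(feet, heads))
```

In a Bayesian treatment like the one described above, such a point estimate would only serve to initialize the parameters, with measurement noise and outliers then handled through the posterior distribution.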