A crucial part of image-guided therapy is registration of preoperative and intraoperative images, by which the precise position and orientation of the patient's anatomy is determined in three dimensions. This paper presents a novel approach to register three-dimensional (3-D) computed tomography (CT) or magnetic resonance (MR) images to one or more two-dimensional (2-D) X-ray images. The registration is based solely on the information present in the 2-D and 3-D images; it does not require fiducial markers, intraoperative X-ray image segmentation, or time-consuming construction of digitally reconstructed radiographs. The originality of the approach lies in using normals to bone surfaces, defined preoperatively in 3-D MR or CT data, and gradients of intraoperative X-ray images at locations defined by the X-ray source and the 3-D surface points. Registration amounts to finding the rigid transformation of the CT or MR volume that provides the best match between surface normals and back-projected gradients, considering both their amplitudes and orientations. We have thoroughly validated our registration method using MR, CT, and X-ray images of a cadaveric lumbar spine phantom, for which a "gold standard" registration was established by means of fiducial markers and its accuracy assessed by target registration error. Volumes of interest, each containing a single vertebra (L1-L5), were registered to different pairs of X-ray images from different starting positions, chosen randomly and uniformly around the "gold standard" position. CT/X-ray (MR/X-ray) registration, which is fast, was successful in more than 91% (82%, except for L1) of trials when started from the "gold standard" position translated or rotated by less than 6 mm or 17 degrees (3 mm or 8.6 degrees), respectively. Root-mean-square target registration errors were below 0.5 mm for CT to X-ray registration and below 1.4 mm for MR to X-ray registration.
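As a minimal illustrative sketch of the kind of criterion the abstract describes (not the authors' implementation), the snippet below scores a candidate rigid pose by projecting preoperatively extracted surface points onto the X-ray detector, sampling the 2-D image gradients there, back-projecting them along the viewing rays, and rewarding agreement with the rotated surface normals in both magnitude and orientation. The `sample_image_gradient` callable, the planar detector geometry, and the simple absolute-dot-product score are assumptions made purely for illustration.

```python
# Illustrative sketch only: match preoperative surface normals against
# back-projected intraoperative X-ray gradients for one candidate rigid pose.
import numpy as np

def rigid_transform(points, R, t):
    """Apply a rigid transformation (3x3 rotation R, translation t) to Nx3 points."""
    return points @ R.T + t

def project_to_detector(points, source, detector_origin, detector_normal):
    """Intersect rays from the X-ray source through 3-D points with a planar detector."""
    d = points - source                                      # ray directions, Nx3
    s = ((detector_origin - source) @ detector_normal) / (d @ detector_normal)
    return source + s[:, None] * d                           # Nx3 detector positions

def gradient_match_score(surface_points, surface_normals, R, t,
                         source, detector_origin, detector_normal,
                         sample_image_gradient):
    """Score agreement between transformed surface normals and back-projected
    2-D gradients sampled where the surface points project onto the detector.
    `sample_image_gradient` is a hypothetical sampler returning Nx3 gradient
    vectors (expressed in world coordinates, lying in the detector plane)."""
    p = rigid_transform(surface_points, R, t)
    n = surface_normals @ R.T                                 # rotate normals with the volume
    proj = project_to_detector(p, source, detector_origin, detector_normal)
    g2d = sample_image_gradient(proj)
    rays = p - source
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    # back-project: keep only the gradient component perpendicular to each ray
    g_bp = g2d - np.sum(g2d * rays, axis=1, keepdims=True) * rays
    # large, well-aligned (or anti-aligned) gradients contribute most
    return np.sum(np.abs(np.sum(n * g_bp, axis=1)))
```

In practice such a score would be fed to a rigid-body optimizer over the six pose parameters; the weighting of gradient amplitude versus orientation shown here is deliberately simplistic.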
In the past few years, a number of two-dimensional (2-D) to three-dimensional (3-D) (2-D-3-D) registration algorithms have been introduced. However, these methods have been developed and evaluated for specific applications and have not been directly compared, so understanding and evaluating their performance remains an open and important issue. To address this challenge, we introduce a standardized evaluation methodology that can be used for all types of 2-D-3-D registration methods and for different applications and anatomies. Our evaluation methodology uses the calibrated geometry of a 3-D rotational X-ray (3DRX) imaging system (Philips Medical Systems, Best, The Netherlands) in combination with image-based 3-D-3-D registration to attain a highly accurate gold standard for 2-D X-ray to 3-D MR/CT/3DRX registration. Furthermore, we propose standardized starting positions and failure criteria to allow future researchers to compare their methods directly. As an illustration, the proposed methodology has been used to evaluate the performance of two 2-D-3-D registration techniques, viz. a gradient-based and an intensity-based method, for images of the spine. The data and gold standard transformations are available on the internet (http://www.isi.uu.nl/Research/Databases/).
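The following sketch illustrates the general shape of such a standardized evaluation, under assumptions of my own: registrations are started from randomly perturbed initial transforms around the gold standard, accuracy is measured as mean target registration error (mTRE) over fixed target points, and a trial counts as a failure above a fixed error threshold. The `register` callable, the purely translational perturbations, and the specific thresholds are placeholders, not the published protocol's parameters.

```python
# Illustrative sketch only: evaluate a 2-D-3-D registration method against a
# gold-standard 4x4 transform using mTRE, random start positions, and a
# simple failure criterion.
import numpy as np

def mtre(T_est, T_gold, targets):
    """Mean distance between target points mapped by the estimated and
    gold-standard homogeneous transforms (targets is Nx3, in mm)."""
    pts = np.c_[targets, np.ones(len(targets))]
    diff = (pts @ T_est.T)[:, :3] - (pts @ T_gold.T)[:, :3]
    return np.mean(np.linalg.norm(diff, axis=1))

def evaluate(register, T_gold, targets, n_trials=200,
             max_offset_mm=10.0, failure_mm=2.0, seed=0):
    """Run `register` (user-supplied: initial 4x4 transform -> estimated 4x4
    transform) from random translational offsets around the gold standard;
    return the success rate and the per-trial mTRE values."""
    rng = np.random.default_rng(seed)
    errors, successes = [], 0
    for _ in range(n_trials):
        offset = np.eye(4)
        offset[:3, 3] = rng.uniform(-max_offset_mm, max_offset_mm, 3)
        T_est = register(T_gold @ offset)
        e = mtre(T_est, T_gold, targets)
        errors.append(e)
        successes += e < failure_mm
    return successes / n_trials, np.array(errors)
```

Reporting both the success rate and the distribution of mTRE for successful trials makes results from different methods directly comparable, which is the point of standardizing the starting positions and failure criteria.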
Because of the inherent imperfections of the image formation process, microscopical images are often corrupted by spurious intensity variations. This phenomenon, known as shading or intensity inhomogeneity, may have an adverse effect on automatic image processing, such as segmentation and registration. Shading correction methods may be prospective or retrospective. The former require an acquisition protocol tuned to shading correction, whereas the latter can be applied to any image, because they use only the information already present in an image. Nine retrospective shading correction methods were implemented, evaluated, and compared on three sets of differently structured synthetic shaded and shading-free images and on three sets of real microscopical images acquired with different acquisition set-ups. The performance of each method was expressed quantitatively by the coefficient of joint variation between two different object classes. The results show that all methods, except the entropy minimization method, work well for certain images but perform poorly for others. The entropy minimization method outperforms the other methods in terms of reduction of spurious intensity variations and preservation of the intensity characteristics of shading-free images. The strength of the entropy minimization method is especially apparent when applied to images containing large-scale objects.
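For concreteness, here is a small sketch of the coefficient of joint variation (CJV) between two labelled object classes, a common quantitative proxy for residual shading (lower is better). The definition used here, (σ₁ + σ₂) / |μ₁ − μ₂|, and the mask-based interface are assumptions for illustration rather than the paper's exact formulation.

```python
# Illustrative sketch only: coefficient of joint variation between two object
# classes; a successful shading correction should reduce it.
import numpy as np

def cjv(image, mask_class1, mask_class2):
    """CJV between two object classes selected by boolean masks."""
    a, b = image[mask_class1], image[mask_class2]
    return (a.std() + b.std()) / abs(a.mean() - b.mean())

# Usage: compare the metric before and after shading correction.
# score_before = cjv(raw_image, foreground_mask, background_mask)
# score_after  = cjv(corrected_image, foreground_mask, background_mask)
```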