Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct-contact methods and are already used in some commercial image-guided radiation therapy systems with fixed X-ray gantries. However, technical challenges, including geometric calibration and computational cost, have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager, combining fiducial-based C-arm tracking with graphics processing unit (GPU) acceleration. The two-stage framework 1) acquires X-ray images and estimates the relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validation was conducted using a publicly available gold-standard dataset, a plastic bone phantom, and cadaveric specimens. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. These results demonstrate the feasibility of the proposed registration framework as a practical alternative for patient registration in IGS routines.
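As a rough illustration of the second stage, the minimal Python sketch below frames intensity-based 2D/3D registration as optimizing a 6-DoF patient pose so that simulated projections of the CT agree with the acquired X-rays. The normalized cross-correlation metric, the Powell optimizer, and the placeholder render_drr() projector are assumptions for illustration; the paper's GPU-accelerated renderer and cost function are not specified here.

```python
# Minimal sketch of intensity-based 2D/3D registration (illustrative only;
# not the authors' implementation).
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def render_drr(ct_volume, pose, projection_matrix):
    """Hypothetical placeholder: ray-cast a digitally reconstructed radiograph
    of ct_volume at the given 6-DoF pose using the calibrated projection."""
    raise NotImplementedError  # replace with a GPU ray-casting renderer

def register(ct_volume, xray_images, projection_matrices, init_pose):
    """Estimate the patient pose that best explains all acquired X-ray views.
    Relative poses between views are assumed known (e.g., from an in-image fiducial)."""
    def cost(pose):
        # Negative similarity summed over all views sharing one patient pose.
        return -sum(
            ncc(render_drr(ct_volume, pose, P), img)
            for img, P in zip(xray_images, projection_matrices)
        )
    result = minimize(cost, init_pose, method="Powell")
    return result.x  # optimized 6-DoF patient pose
```

Summing the similarity over all views that share a single patient pose is what lets the fiducial-derived relative poses between images constrain the optimization.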
Background and purpose: Because of the varying structure of dysplastic hips, the optimal realignment of the joint during periacetabular osteotomy (PAO) may differ between patients. Three-dimensional (3D) mechanical and radiological analysis may better account for patient-specific morphology and could improve and automate optimal joint realignment.

Patients and methods: We evaluated the 10-year outcomes of 12 patients following PAO. We compared 3D mechanical analysis results to both radiological and clinical measurements. A 3D discrete-element analysis algorithm was used to calculate the pre- and postoperative contact pressure profile within the hip. Radiological angles describing the coverage of the joint were measured using a computerized approach at the actual and theoretical orientations of the acetabular cup. Quantitative results were compared with postoperative clinical evaluation scores (Harris score) and patient-completed outcome surveys (q-score) done at 2 and 10 years.

Results: The 3D mechanical analysis indicated that peak joint contact pressure was reduced by an average factor of 1.7 after PAO. Lateral coverage of the femoral head increased in all patients; however, it did not proportionally reduce the maximum contact pressure and, in 1 case, the pressure increased. This patient had the lowest 10-year q-score (70 out of 100) of the cohort. Another hip was converted to hip arthroplasty after 3 years because of increasing osteoarthritis.

Interpretation: The 3D analysis showed that a reduction in contact pressure was theoretically possible for all patients in this cohort, but this could not be achieved in every case during surgery. While intraoperative factors may affect the actual surgical outcome, the results show that 3D contact pressure analysis is consistent with traditional PAO planning techniques (more so than 2D analysis) and may be a valuable addition to preoperative planning and intraoperative assessment of joint realignment.
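For illustration only, the following minimal Python sketch shows a discrete-element contact model of the kind referenced above: cartilage is treated as a bed of linear compressive springs over a discretized surface, and per-element pressure follows from the rigid displacement that balances the applied joint load. The spring law, uniform stiffness, single load axis, and function names are assumptions; the study's actual discrete-element formulation and parameters are not described here.

```python
# Minimal sketch of a discrete-element contact pressure model (illustrative
# simplification; not the study's algorithm).
import numpy as np
from scipy.optimize import brentq

def contact_pressures(penetrations, areas, stiffness, joint_load):
    """Per-element contact pressure for a rigid femoral head pressed into the
    acetabular cartilage along a single load axis.

    penetrations : (N,) initial overlap of each surface element [mm]
    areas        : (N,) element areas [mm^2]
    stiffness    : spring stiffness per unit area [N/mm^3] (assumed uniform)
    joint_load   : resultant joint force along the load axis [N]
    """
    penetrations = np.asarray(penetrations, dtype=float)
    areas = np.asarray(areas, dtype=float)

    def total_force(u):
        # Elements separate (zero pressure) when their deformation is negative.
        depth = np.maximum(penetrations + u, 0.0)
        return float(np.sum(stiffness * depth * areas))

    # Solve total_force(u) = joint_load for the rigid displacement u.
    lo = -float(penetrations.max())      # no contact, zero force
    hi = 1.0
    while total_force(hi) < joint_load:  # expand bracket until load is exceeded
        hi *= 2.0
    u_eq = brentq(lambda u: total_force(u) - joint_load, lo, hi)

    depth = np.maximum(penetrations + u_eq, 0.0)
    return stiffness * depth             # pressure = stiffness * deformation [N/mm^2]

# Peak pressure before vs. after a simulated reorientation could then be
# compared as peak_pre / peak_post, analogous to the reduction factor above.
```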
Machine learning-based approaches outperform competing methods in most disciplines relevant to diagnostic radiology. Interventional radiology, however, has not yet benefited substantially from the advent of deep learning, for two main reasons: 1) most images acquired during a procedure are never archived and are thus unavailable for learning, and 2) even if they were available, annotation would be a severe challenge due to the vast amounts of data. For fluoroscopy-guided procedures, an interesting alternative to true interventional fluoroscopy is in silico simulation of the procedure from 3D diagnostic CT. In this case, labeling is comparatively easy and potentially readily available, yet the appropriateness of the resulting synthetic data depends on the forward model. In this work, we propose DeepDRR, a framework for fast and realistic simulation of fluoroscopy and digital radiography from CT scans, tightly integrated with the software platforms native to deep learning. We use machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, combined with analytic forward projection and noise injection to achieve the required performance. Using the example of anatomical landmark detection in X-ray images of the pelvis, we demonstrate that machine learning models trained on DeepDRRs generalize to unseen clinically acquired data without re-training or domain adaptation. These results are promising and promote the establishment of machine learning in fluoroscopy-guided procedures.
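To make the pipeline concrete, the minimal Python sketch below strings together material decomposition, analytic forward projection, and noise injection. It is a deliberate simplification, not DeepDRR's implementation: HU thresholding stands in for the learned 3D material decomposition, the beam is treated as monoenergetic, scatter estimation is omitted, and line_integrals() is a hypothetical placeholder for a GPU projector; the attenuation coefficients are illustrative values only.

```python
# Minimal sketch of a DRR simulation pipeline (simplified illustration of the
# ideas above, not the DeepDRR implementation).
import numpy as np

# Illustrative linear attenuation coefficients [1/mm], roughly ~60 keV (assumed).
MU = {"air": 0.0, "soft_tissue": 0.02, "bone": 0.06}

def decompose(ct_hu):
    """Crude material segmentation by HU thresholds (stand-in for the paper's
    machine-learning material decomposition)."""
    return {
        "air": ct_hu < -500,
        "soft_tissue": (ct_hu >= -500) & (ct_hu < 300),
        "bone": ct_hu >= 300,
    }

def line_integrals(mask, pose, detector_shape):
    """Hypothetical placeholder: ray-cast the thickness [mm] of the masked
    material seen by each detector pixel for the given C-arm pose."""
    raise NotImplementedError  # replace with a GPU forward projector

def simulate_drr(ct_hu, pose, detector_shape=(512, 512), photons_per_pixel=1e4):
    masks = decompose(ct_hu)
    # Beer-Lambert attenuation accumulated over material-specific path lengths.
    attenuation = np.zeros(detector_shape)
    for name, mask in masks.items():
        attenuation += MU[name] * line_integrals(mask, pose, detector_shape)
    expected = photons_per_pixel * np.exp(-attenuation)
    noisy = np.random.poisson(expected)            # quantum noise injection
    return -np.log(np.maximum(noisy, 1) / photons_per_pixel)  # log-normalized image
```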