Purpose: Guidance and quality control in orthopedic surgery increasingly rely on intra-operative fluoroscopy using a mobile C-arm. Accurate acquisition of standardized, anatomy-specific projections is essential in this process. The corresponding iterative positioning of the C-arm is error-prone and involves repeated manual acquisitions or even continuous fluoroscopy. To reduce time and radiation exposure for patients and clinical staff, and to avoid errors in fracture reduction or implant placement, we aim at guiding, and in the long run automating, this procedure.

Methods: In contrast to the state of the art, we tackle this inherently ill-posed problem without requiring patient-individual prior information such as preoperative computed tomography (CT) scans, without the need for registration, and without requiring additional technical equipment beyond the projection images themselves. We propose learning the necessary anatomical hints for efficient C-arm positioning from in silico simulations, leveraging large numbers of 3D CTs. Specifically, we propose a convolutional neural network regression model that predicts 5-degree-of-freedom pose updates directly from a first X-ray image. The method generalizes to different anatomical regions and standard projections.

Results: Quantitative and qualitative validation was performed for two clinical applications involving two highly dissimilar anatomies, namely the lumbar spine and the proximal femur. Starting from one initial projection, the mean absolute pose error to the desired standard pose is iteratively reduced across different anatomy-specific standard projections. Acquisitions of both hip joints on 4 cadavers allowed for an evaluation on clinical data, demonstrating that the approach generalizes without retraining.

Conclusion: Overall, the results suggest the feasibility of an efficient deep learning-based automated positioning procedure trained on simulations. Our proposed two-stage approach for C-arm positioning significantly improves accuracy on synthetic images. In addition, we demonstrated that learning based on simulations translates to acceptable performance on real X-rays.
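The iterative positioning described above can be sketched as a simple closed loop: acquire an image, let the regressor predict a pose update, apply it, and repeat until the residual is small. This is a minimal, hypothetical sketch; `predict_pose_update` stands in for the trained CNN (which would see an X-ray, not the pose itself) and is mocked here as a damped estimate of the true 5-DoF offset so the loop is runnable.

```python
import numpy as np

TARGET_POSE = np.zeros(5)  # desired standard projection, 5 degrees of freedom

def predict_pose_update(current_pose, gain=0.8):
    # Stand-in for the CNN regressor: a real system would render or acquire
    # an X-ray at current_pose and feed the image to the network. The mock
    # returns a fraction of the true pose offset, imitating an imperfect
    # but convergent predictor.
    return gain * (TARGET_POSE - current_pose)

def position_c_arm(initial_pose, max_iter=10, tol=0.05):
    # Iteratively apply predicted pose updates until the maximum absolute
    # pose error falls below the tolerance or the iteration budget is spent.
    pose = np.asarray(initial_pose, dtype=float)
    for _ in range(max_iter):
        pose = pose + predict_pose_update(pose)
        if np.max(np.abs(TARGET_POSE - pose)) < tol:
            break
    return pose

final = position_c_arm([20.0, -15.0, 5.0, 8.0, -3.0])
print(np.max(np.abs(final)))  # residual error shrinks toward tol
```

With a predictor that recovers even a fixed fraction of the true offset per step, the residual error decays geometrically, which is the intuition behind reducing the mean absolute pose error across iterations.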
X-ray-based measurement and guidance are commonly used tools in orthopaedic surgery to facilitate a minimally invasive workflow. Typically, surgical planning is first performed using knowledge of bone morphology and anatomical landmarks. Information about bone location then serves as a prior for registration during overlay of the planning on intra-operative X-ray images. Performing these steps manually, however, is prone to intra-rater/inter-rater variability and increases task complexity for the surgeon. To remedy these issues, we propose an automatic framework for planning and subsequent overlay. We evaluate it on the example of femoral drill site planning for medial patellofemoral ligament reconstruction surgery. A deep multi-task stacked hourglass network is trained on 149 conventional lateral X-ray images to jointly localize two femoral landmarks, predict a region of interest for the posterior femoral cortex tangent line, and perform semantic segmentation of the femur, patella, tibia, and fibula with adaptive task complexity weighting. On 38 clinical test images, the framework achieves a median localization error of 1.50 mm for the femoral drill site and mean IoU scores of 0.99, 0.97, 0.98, and 0.96 for the femur, patella, tibia, and fibula, respectively. The demonstrated approach consistently performs surgical planning at expert-level precision without the need for manual correction.
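The abstract does not specify how "adaptive task complexity weighting" is implemented; one common approach for balancing jointly trained tasks is homoscedastic-uncertainty weighting, where each task loss is scaled by a learned precision term. The sketch below illustrates that scheme as an assumption, not the paper's exact method; in a real network the `log_vars` would be trainable parameters rather than plain numbers.

```python
import numpy as np

def combined_loss(task_losses, log_vars):
    # Uncertainty-style multi-task weighting: each task loss is scaled by
    # exp(-log_var) (the learned precision), and log_var itself is added as
    # a regularizer so the model cannot trivially down-weight every task.
    total = 0.0
    for loss, log_var in zip(task_losses, log_vars):
        total += np.exp(-log_var) * loss + log_var
    return total

# e.g. landmark localization, ROI prediction, and segmentation losses
losses = [0.9, 0.3, 1.2]
log_vars = [0.0, 0.0, 0.0]      # neutral weights: plain sum of the losses
print(combined_loss(losses, log_vars))  # ≈ 2.4
```

Raising a task's `log_var` softens its contribution (useful for noisy or hard tasks), while the additive `log_var` term penalizes ignoring a task altogether.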
Purpose: Reduction and osteosynthesis of ankle fractures is a challenging surgical procedure when it comes to the verification of the reduction result. Evaluation is conducted using intra-operative imaging of the injured ankle and depends on the expertise of the surgeon. Studies suggest that intra-individual variance of the ankle bone shape and pose is considerably lower than the inter-individual variance. It stands to reason that the information gain from the healthy contralateral side can help to improve the evaluation.

Method: In this paper, an assistance system is proposed that provides a side-to-side view of the two ankle joints for visual comparison and instant evaluation using only one 3D C-arm image. Two convolutional neural networks (CNNs) are employed to extract the relevant image regions and pose information of each ankle so that they can be aligned with each other. A first U-Net uses a sliding window to predict the location of each ankle. The standard plane estimation is formulated as a segmentation problem, so that a second U-Net predicts the three viewing planes for alignment.

Results: Experiments were conducted to assess the accuracy of the individual steps on 218 unilateral ankle datasets as well as the overall performance on 7 bilateral ankle datasets. The experiments on unilateral ankles yield a median position-to-plane error of 0.73 ± 1.36 mm and a median angular error between 2.98° and 3.71° for the plane normals.

Conclusion: Standard plane estimation via segmentation outperforms direct pose regression. Furthermore, the complete pipeline was evaluated including ankle detection and subsequent plane estimation on bilateral datasets. The proposed pipeline enables a direct contralateral side comparison without additional radiation. This has the potential to ease and improve the intra-operative evaluation for the surgeons in the future and reduce the need for revision surgery.
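When a viewing plane is predicted as a segmentation mask, the mask's voxel coordinates can be converted back into a geometric plane by a least-squares fit, and predicted planes can then be scored against ground truth via the angle between their normals, as in the angular-error metric above. The following is a minimal sketch under those assumptions (SVD plane fit, toy point cloud); it is not the authors' implementation.

```python
import numpy as np

def fit_plane(points):
    # Least-squares plane fit: center the points, then take the singular
    # vector with the smallest singular value as the plane normal (the
    # direction of least variance).
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid)
    return centroid, vh[-1]

def angular_error_deg(n1, n2):
    # Angle between two plane normals, sign-invariant since a plane's
    # normal is only defined up to orientation.
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Toy "segmented plane": voxels lying on z = 0 with slight noise.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-10, 10, 200),
                       rng.uniform(-10, 10, 200),
                       rng.normal(0.0, 0.01, 200)])
_, normal = fit_plane(pts)
print(angular_error_deg(normal, np.array([0.0, 0.0, 1.0])))  # ≈ 0
```

Formulating plane estimation as segmentation lets the network exploit dense spatial supervision, which is one plausible reason it outperforms direct pose regression here.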