A major challenge during endovascular interventions is visualising the position and orientation of the catheter being inserted. This is typically achieved by intermittent X-ray imaging. Since the radiation exposure to the surgeon is considerable, it is desirable to reduce X-ray exposure to the bare minimum needed. Additionally, mapping two-dimensional (2D) X-ray images to three-dimensional (3D) locations is challenging. The authors present the development of a real-time navigation framework that provides a 3D holographic view of the vascular system without any need for radiation. They extract the patient's surface and vascular tree from pre-operative computed tomography data and register them to the patient using a magnetic tracking system. The system was evaluated on an anthropomorphic full-body phantom by experienced clinicians using a four-point questionnaire, on which it scored an average of 17.5 out of a maximum of 20. The authors' approach shows great potential to improve the workflow of endovascular procedures while simultaneously reducing X-ray exposure. It should also flatten the learning curve and help novices master the required skills more quickly.
Introduction: Endovascular aortic repair (EVAR) is a minimally invasive technique that prevents life-threatening rupture in patients with aortic pathologies by implantation of an endoluminal stent graft. During the endovascular procedure, device navigation is currently performed by fluoroscopy in combination with digital subtraction angiography. This study presents the current iterative process of biomedical engineering within the disruptive interdisciplinary project Nav EVAR, which includes advanced navigation, imaging techniques and augmented reality, with the aim of reducing side effects (namely radiation exposure and contrast agent administration) and optimising visualisation during EVAR procedures. This article describes the current prototype developed in this project and the experiments conducted to evaluate it. Methods: The current approach of the Nav EVAR project is to guide EVAR interventions in real time with an electromagnetic tracking system, after attaching a sensor to the catheter tip, and to display this information on Microsoft HoloLens glasses. This augmented reality technology enables the visualisation of virtual objects superimposed on the real environment. These virtual objects include three-dimensional (3D) objects (namely 3D models of the skin and vascular structures) and two-dimensional (2D) objects [namely orthogonal views of computed tomography (CT) angiograms, 2D images of 3D vascular models, and 2D images of a new virtual angioscopy whose appearance of the vessel wall follows that shown in ex vivo and in vivo angioscopies]. Specific external markers were designed to be used as landmarks in the registration process, mapping the tracking data and radiological data into a common space. In addition, the use of real-time 3D ultrasound (US) for guiding endovascular tools and updating navigation with intraoperative imaging is also under evaluation in the Nav EVAR project.
US volumes are streamed from the US system to HoloLens and visualised at a fixed distance from the probe by tracking augmented reality markers. A human torso model that includes a 3D-printed patient-specific aortic model was built to provide a realistic test environment for evaluating the technical components of the Nav EVAR project. The solutions presented in this study were tested using a US training model and the aortic-aneurysm phantom. Results: During navigation of the catheter tip in the US training model, the 3D models of the phantom surface and vessels were visualised on HoloLens. In addition, a virtual angioscopy was built from a CT scan of the aortic-aneurysm phantom. The external markers designed for this study were visible in the CT scan, and the electromagnetically tracked pointer fitted in each marker hole. US volumes of the US training model were sent from the US system to HoloLens for display, with a latency of 259±86 ms (mean±standard deviation). Conclusion: The Nav EVAR project tackles the problem of radiation exposure and contrast agent administration during EVAR interventions by using a multidisciplinary approach to guide the endovascular tools. Its current state has several limitations, such as the rigid alignment between preoperative data and the simulated patient. Nevertheless, the techniques shown in this study, in combination with fibre Bragg gratings and optical coherence tomography, are a promising approach to overcoming the problems of EVAR interventions.
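The registration step described in the Methods, mapping the electromagnetically tracked marker positions and the corresponding CT marker positions into a common space, is typically solved as a least-squares rigid point-set alignment. A minimal sketch of one standard approach (the SVD-based Kabsch/Horn solution), assuming paired landmarks; the coordinates and transform below are invented for illustration, not taken from the study:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    points via SVD (Kabsch/Horn method). src, dst: (N, 3) arrays of
    corresponding landmark coordinates."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical example: tracker-space landmarks are a rotated and
# translated copy of the CT-space landmarks (coordinates in mm).
ct = np.array([[0, 0, 0], [100, 0, 0], [0, 80, 0], [0, 0, 60.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
tracker = ct @ R_true.T + np.array([5.0, -12.0, 40.0])

R, t = rigid_register(ct, tracker)
# Fiducial registration error: how well the recovered transform maps
# the CT landmarks onto the tracked ones.
fre = np.linalg.norm(ct @ R.T + t - tracker, axis=1).max()
```

With noise-free corresponding points, as here, the recovered transform matches the true one to numerical precision; in practice, localisation noise in both spaces makes the fiducial registration error a key quality measure.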
Real-time volumetric (4D) ultrasound has shown high potential for diagnostic and therapy guidance tasks. One of the main drawbacks of ultrasound imaging to date is the reliance on manual probe positioning and the resulting user dependence. Robotic assistance could help overcome this issue and facilitate the acquisition of long-term image data to observe dynamic processes in vivo over time. The aim of this study is to assess the feasibility of robotic probe manipulation and organ motion quantification during extended imaging sessions. The system consists of a collaborative robot and a 4D ultrasound system providing real-time data access. Five healthy volunteers received liver and prostate scans during free breathing over 30 min. Initial probe placement was performed with real-time remote control with a predefined contact force of 10 N. During scan acquisition, the probe position was continuously adjusted to the body surface motion using impedance control. Ultrasound volumes, the pose of the end-effector and the estimated contact forces were recorded. For motion analysis, one anatomical landmark was manually annotated in a subset of ultrasound frames for each experiment. Probe contact was uninterrupted over the entire scan duration in all ten sessions. Organ drift and imaging artefacts were successfully compensated using remote control. The median contact force along the probe’s longitudinal axis was 10.0 N with maximum values of 13.2 and 21.3 N for liver and prostate, respectively. Forces exceeding 11 N only occurred in 0.3% of the time. Probe and landmark motion were more pronounced in the liver, with median interquartile ranges of 1.5 and 9.6 mm, compared to 0.6 and 2.7 mm in the prostate. The results show that robotic ultrasound imaging with dynamic force control can be used for stable, long-term imaging of anatomical regions affected by motion. 
The system facilitates the acquisition of 4D image data in vivo over extended scanning periods for the first time and holds the potential to be used for motion monitoring for therapy guidance as well as diagnostic tasks.
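The force-adaptive probe placement described above can be illustrated with a one-dimensional admittance-style control loop: the probe retracts when the measured contact force exceeds the 10 N target and advances when contact becomes too light. The following is a minimal sketch with a linear spring contact model; the stiffness, damping gain, and breathing waveform are hypothetical, since the study's actual controller and parameters are not given in the abstract:

```python
import math

K_TISSUE = 2000.0   # contact stiffness [N/m] -- hypothetical value
F_DES = 10.0        # target contact force [N] (as in the study)
DAMPING = 200.0     # admittance damping [N*s/m] -- hypothetical gain
DT = 0.01           # control period [s]

def contact_force(x_probe, x_surface):
    """Spring contact model: force rises linearly with penetration depth
    and drops to zero when the probe loses contact."""
    return max(0.0, K_TISSUE * (x_probe - x_surface))

def admittance_update(x_probe, f_meas):
    """One control step: commanded velocity is proportional to the force
    error, so the probe yields under excess force and follows the
    surface when contact lightens."""
    return x_probe - (f_meas - F_DES) / DAMPING * DT

# Simulate 30 s of breathing: the body surface moves sinusoidally
# (10 mm amplitude, 4 s period) while the controller tracks it.
x = F_DES / K_TISSUE        # start at the 10 N equilibrium depth
forces = []
for n in range(3000):
    t = n * DT
    surface = 0.010 * math.sin(2.0 * math.pi * t / 4.0)
    f = contact_force(x, surface)
    forces.append(f)
    x = admittance_update(x, f)
```

In this toy model the force stays within a few newtons of the 10 N target throughout the breathing cycle and contact is never lost, qualitatively mirroring the bounded force excursions reported in the study.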
Purpose: To allow continuous acquisition of high-quality 4D ultrasound images for non-invasive live tracking of tumours for IGRT, image- and force-adaptive strategies for robotised placement of 4D ultrasound probes are developed and evaluated. Methods: The developed robotised ultrasound system is based on a six-axis industrial robot (Adept Viper s850) carrying a 4D ultrasound transducer with a mounted force-torque sensor. The force-adaptive placement strategies include probe position control using artificial potential fields and contact pressure regulation by a proportional-derivative (PD) controller. The basis for live target tracking is a continuous minimum contact pressure to ensure good image quality and high patient comfort. This contact pressure can be significantly disturbed by respiratory movements and has to be compensated. All measurements were performed on human subjects under realistic conditions. When performing cardiac ultrasound, rib and lung shadows are a common source of interference and can disrupt the tracking. To ensure continuous tracking, these artefacts had to be detected so that the probe could be realigned automatically. The detection is realised by multiple algorithms based on entropy calculations as well as a determination of the image quality. Results: Through active contact pressure regulation it was possible to reduce the variance of the contact pressure by 89.79% despite respiratory motion of the chest. The image processing results clearly demonstrate the feasibility of detecting image artefacts such as rib shadows in real time. Conclusion: In all cases, it was possible to stabilise the image quality by active contact pressure control and automatic detection of image artefacts. This makes it possible to compensate for such interferences by realigning the probe and thus continuously optimising the ultrasound images.
This is a major step towards fully automated transducer positioning and opens the possibility of stable target tracking in ultrasound-guided radiation therapy, which requires a contact pressure of 5–10 N. This work was supported by the Graduate School for Computing in Medicine and Life Sciences funded by Germany's Excellence Initiative [DFG GSC 235/1].
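The entropy-based artefact detection mentioned in the Methods can be sketched as follows: acoustic shadows behind ribs appear as nearly uniform dark regions, so the grey-level entropy of shadowed image columns drops well below that of well-insonified speckle. A minimal illustration on a synthetic frame; the column-wise analysis, bin count, and threshold are assumptions for the sketch, not the study's actual algorithm:

```python
import numpy as np

def column_entropy(img, bins=32):
    """Shannon entropy of the grey-level histogram of each image column
    (intensities assumed normalised to [0, 1])."""
    ents = []
    for col in img.T:
        hist, _ = np.histogram(col, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]                      # 0 * log(0) := 0
        ents.append(-(p * np.log2(p)).sum())
    return np.array(ents)

def shadowed_columns(img, threshold=1.0):
    """Flag columns whose entropy falls below a (hypothetical) threshold,
    signalling that the probe should be realigned."""
    return column_entropy(img) < threshold

# Synthetic frame: speckle-like "tissue" on the left, a nearly constant
# dark "rib shadow" on the right.
rng = np.random.default_rng(0)
frame = rng.random((128, 64))             # high-entropy tissue columns
frame[:, 40:] = 0.02                      # near-zero-entropy shadow columns
mask = shadowed_columns(frame)
```

A constant column concentrates all pixels in one histogram bin, giving zero entropy, while the speckle columns stay close to the maximum of log2(32) = 5 bits, so a simple threshold separates the two cleanly in this toy case.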