The blood-brain barrier (BBB) is a prime focus for clinicians, both for maintaining homeostatic function in health and for delivering theranostics in brain cancer and a number of neurological diseases. The structural hierarchy and in situ biochemical signaling of the BBB neurovascular unit have been the primary targets for recapitulation in in vitro modules. Microengineered perfusion systems and advances in 3D cellular and organoid culture have given a major thrust to BBB research in neuropharmacology. In this review, we revisit nanoparticle-based biomolecular engineering that enables nanoparticles to maneuver, be controlled, target, and deliver theranostic payloads across the cellular BBB as nanorobots, or nanobots. We then provide a brief outline of specific case studies addressing payload delivery in brain tumors and neurological disorders (e.g., Alzheimer's disease, Parkinson's disease, multiple sclerosis). In addition, we address the opportunities and challenges in nanorobot development and design. Finally, we discuss how computationally powered machine learning (ML) tools and artificial intelligence (AI) can be partnered with robotics to predict and design next-generation nanorobots that interact with and deliver across the BBB without causing damage, toxicity, or malfunction. This review is intended as a multidisciplinary reference for clinicians, roboticists, chemists, and bioengineers involved in cutting-edge pharmaceutical design and BBB research.
A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages preoperative data, as a source of patient-specific prior knowledge, together with vasculature pulsation and endoscopic visual cues in order to accurately segment the highly noisy and cluttered environment of an endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries.
Abstract: In image-guided robotic surgery, labeling and segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information can provide surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a very challenging problem due to a variety of complications, including significant noise and clutter attributed to bleeding and smoke from cutting, poor color and texture contrast between different tissue types, occluding surgical tools, and limited (surface) visibility of the objects' geometries in the projected camera views. In this paper, we propose a multi-modal approach to segmentation in which preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match them with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization, thereby correcting the calibration parameters within the segmentation process. We evaluate our technique in several scenarios: synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery, with results demonstrating high accuracy and robustness.
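The abstract's idea of embedding uncertain camera parameters into the pose optimization can be illustrated with a minimal sketch (not the authors' full method, which also handles non-rigid deformation and segmentation): jointly refine a rigid pose and the focal length by minimizing 2D reprojection error. All function names and the single-focal-length pinhole model are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points, params):
    """Pinhole projection of 3D points given params = [rvec(3), t(3), f]."""
    rvec, t, f = params[:3], params[3:6], params[6]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:  # Rodrigues' rotation formula
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    cam = points @ R.T + t            # transform into camera frame
    return f * cam[:, :2] / cam[:, 2:3]  # perspective divide, scale by focal

def fit_pose_and_focal(model_pts, observed_2d, x0):
    """Jointly refine pose AND focal length by least-squares reprojection error."""
    resid = lambda x: (project(model_pts, x) - observed_2d).ravel()
    return least_squares(resid, x0).x
```

Because the focal length is a free variable in the residual, a miscalibrated or re-zoomed camera is corrected as part of the same fit rather than assumed known.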
Hilar dissection is an important and delicate stage in partial nephrectomy, during which surgeons remove connective tissue surrounding the renal vasculature. Serious complications arise when occluded blood vessels, concealed by fat, are missed in the endoscopic view and as a result are not appropriately clamped. Such complications may include catastrophic blood loss from internal bleeding and associated occlusion of the surgical view during the excision of the cancerous mass (due to heavy bleeding), both of which may compromise the visibility of surgical margins or even result in a conversion from a minimally invasive to an open intervention. To aid in vessel discovery, we propose a novel automatic method to segment occluded vasculature by labeling minute pulsatile motion that is otherwise imperceptible to the naked eye. Our segmentation technique extracts subtle tissue motions using a technique adapted from phase-based video magnification, in which we measure motion from periodic changes in local phase information, albeit for labeling rather than magnification. Based on measuring local phase through spatial decomposition of each frame of the endoscopic video using complex wavelet pairs, our approach assigns segmentation labels by detecting regions exhibiting temporal local phase changes matching the heart rate.
* Corresponding author: Alborz Amir-Khalili (alborza@ece.ubc.ca)
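The core pipeline described above (spatial decomposition into local phase, then temporal analysis of that phase against the heart rate) can be sketched in simplified form. This is not the paper's implementation: it uses a single 1-D complex Gabor filter in place of a full complex wavelet pyramid, and the spatial frequency, band width, and power threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def local_phase(frame, spatial_freq=0.25):
    """Local spatial phase of a grayscale frame via a 1-D complex Gabor filter."""
    n = np.arange(-8, 9)
    gabor = np.exp(-n**2 / 18.0) * np.exp(2j * np.pi * spatial_freq * n)
    resp = convolve2d(frame, gabor[None, :], mode="same")
    return np.angle(resp)

def pulsatile_mask(frames, fps, heart_hz, band=0.3, power_thresh=0.5):
    """Label pixels whose temporal phase variation concentrates near the heart rate."""
    phases = np.stack([local_phase(f) for f in frames])  # (T, H, W)
    phases = np.unwrap(phases, axis=0)                   # continuous phase over time
    spectrum = np.abs(np.fft.rfft(phases, axis=0)) ** 2  # temporal power spectrum
    freqs = np.fft.rfftfreq(len(frames), d=1.0 / fps)
    in_band = np.abs(freqs - heart_hz) < band            # bins near heart rate
    band_power = spectrum[in_band].sum(axis=0)
    total = spectrum[1:].sum(axis=0) + 1e-12             # skip DC component
    return band_power / total > power_thresh             # boolean mask (H, W)
```

A pixel is labeled when the fraction of its non-DC temporal phase energy falling near the heart rate exceeds the threshold, which is what distinguishes pulsatile vessel motion from static tissue and broadband noise.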
A cerebral aneurysm is a weakness in a blood vessel that may enlarge and bleed into the surrounding area, a life-threatening condition. Early and accurate diagnosis of aneurysms is therefore essential to help doctors decide on the right treatment. This work aims to implement a real-time automated segmentation technique for cerebral aneurysms on the Zynq system-on-chip (SoC), and to visualize the results on a 3D plane using virtual reality (VR) facilities, such as the Oculus Rift, to create an interactive environment for training purposes. The segmentation algorithm is based on hard thresholding and the Haar wavelet transform. The system was tested on six subjects, each consisting of 512 × 512 DICOM slices of 16-bit 3D rotational angiography. The quantitative and subjective evaluations show that the segmented masks and generated 3D volumes are of acceptable quality. In addition, the hardware implementation results show that the proposed system can process an image on the Zynq SoC in an average time of 5.2 ms.
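The abstract names hard thresholding and the Haar wavelet transform as the basis of the segmentation. A minimal software sketch of that combination (one decomposition level, with threshold values that are illustrative assumptions rather than the paper's parameters, and with no claim to match the hardware design) might look like:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation LL and details LH, HL, HH."""
    a = (img[0::2] + img[1::2]) / 2.0       # row averages
    d = (img[0::2] - img[1::2]) / 2.0       # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse one-level 2-D Haar transform (exact reconstruction)."""
    H2, W2 = LL.shape
    a = np.empty((H2, W2 * 2)); d = np.empty((H2, W2 * 2))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.empty((H2 * 2, W2 * 2))
    img[0::2] = a + d; img[1::2] = a - d
    return img

def segment(img, detail_thresh, mask_thresh):
    """Hard-threshold Haar detail coefficients, reconstruct, then binarize."""
    LL, LH, HL, HH = haar2d(img)
    for band in (LH, HL, HH):
        band[np.abs(band) < detail_thresh] = 0.0  # hard thresholding
    smooth = ihaar2d(LL, LH, HL, HH)
    return smooth > mask_thresh
```

Hard thresholding in the wavelet domain suppresses small-amplitude noise while keeping strong edges, which makes the subsequent intensity threshold on the reconstructed image more stable; the arithmetic here (adds, subtracts, shifts) is also the kind that maps naturally onto FPGA fabric like the Zynq's.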