Objectives: Intraoperative localization and preservation of parathyroid glands (PGs) are challenging during thyroid surgery. A new noninvasive technique combining near-infrared PG autofluorescence detection with dye-free imaging angiography, which provides intraoperative feedback, has recently been introduced. The objective of this study was to evaluate this technique in real time.

Materials and Methods: A pilot feasibility study of a portable imaging device in four patients who underwent either thyroid lobectomy or total thyroidectomy is presented. PG autofluorescence and vascularity/tissue perfusion were monitored on a real-time screen display during the surgical procedure.

Results: Three lobectomies and one total thyroidectomy were performed. Of the nine PGs identified by the operating surgeon, eight were confirmed using the autofluorescence device. Each PG was successfully classified as either well perfused or devascularized, and devascularized PGs were autotransplanted.

Conclusions: These preliminary results suggest that the combination of PG autofluorescence detection and dye-free angiography can potentially be used to assess PG function. Further validation studies will delineate the effectiveness of this technique in clinical practice.
Early and precise detection of parathyroid glands (PGs) is a challenging problem in thyroidectomy due to their small size and similar appearance to surrounding tissues. Near-infrared autofluorescence (NIRAF) has attracted interest as a method to localize PGs; however, a high incidence of false positives has been reported with this technique. We introduce a prototype equipped with a coaxial excitation light (785 nm) and a dual sensor to address the false-positive issue of the NIRAF technique. We tested the clinical feasibility of our prototype in situ and ex vivo, using sterile drapes, on 10 human subjects. Video data (1,287 images) of detected PGs were collected to train, validate, and compare PG detection performance. We achieved a mean average precision of 94.7% and a processing time of 19.5 ms per detection. This feasibility study supports the effectiveness of the optical design and may open new doors for deep learning-based PG detection.
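The abstract's headline metric is mean average precision (mAP). As a hedged illustration (not the authors' code), the sketch below computes single-class average precision at an IoU threshold of 0.5 from confidence-ranked detections, which is the standard construction behind such figures; all boxes and scores in the usage example are placeholders.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, iou_thr=0.5):
    """preds: list of (confidence, box); gts: list of boxes. Returns AP@iou_thr."""
    preds = sorted(preds, key=lambda p: -p[0])  # highest confidence first
    matched, tp = set(), np.zeros(len(preds))
    for i, (_, box) in enumerate(preds):
        # greedily match each prediction to the best unmatched ground truth
        best_j, best_iou = -1, iou_thr
        for j, gt in enumerate(gts):
            ov = iou(box, gt)
            if j not in matched and ov >= best_iou:
                best_j, best_iou = j, ov
        if best_j >= 0:
            matched.add(best_j)
            tp[i] = 1.0
    cum_tp = np.cumsum(tp)
    recall = cum_tp / max(len(gts), 1)
    precision = cum_tp / np.arange(1, len(preds) + 1)
    # all-point interpolation: make precision non-increasing, then
    # integrate it over the recall increments
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    recall = np.concatenate(([0.0], recall))
    return float(np.sum((recall[1:] - recall[:-1]) * precision))

# illustrative placeholder data: two ground-truth PGs, two detections
gts = [(10, 10, 50, 50), (70, 20, 110, 60)]
preds = [(0.95, (12, 11, 52, 49)), (0.80, (200, 200, 240, 240))]
print(f"AP@0.5 = {average_precision(preds, gts):.3f}")  # 0.500: one of two found
```

In a multi-class setting, mAP is simply this quantity averaged over classes; with PGs as the single class of interest, AP and mAP coincide.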
Combining laser technology with robotic precision and accuracy promises significant advances in minimally invasive surgical interventions. Lasers have already become a widespread tool in numerous surgical applications and are proposed as a replacement for traditional tools (i.e., scalpels and electrocautery devices) to minimize surgical trauma, shorten healing times, and reduce the risk of postoperative complications. Unlike other energy sources, laser energy is wavelength-dependent, allowing preferential energy absorption in specific tissue types; this can minimize damage to healthy tissue and improve control over surgical outcomes. Merging robotic control with laser techniques can help physicians aim the laser more accurately and paves the way to closed-loop automatic control of laser-tissue interactions. Herein, a review of state-of-the-art robotic systems for laser surgery is presented. The goals of this paper are to present recent contributions in advanced intelligent systems for robot-assisted laser surgery, give readers a better understanding of laser optics and the physics of laser-tissue interactions, discuss clinical applications of lasers in surgery, and provide guidance for future system design.
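To make the wavelength-dependence claim concrete, the minimal sketch below applies the Beer-Lambert law, the standard model of light attenuation in an absorbing medium; the absorption coefficients are hypothetical placeholder values, not tissue measurements.

```python
import math

def transmitted_fraction(mu_a_per_cm, depth_cm):
    """Beer-Lambert law: fraction of incident intensity remaining at a depth,
    I(z)/I0 = exp(-mu_a * z), for absorption coefficient mu_a (1/cm)."""
    return math.exp(-mu_a_per_cm * depth_cm)

# hypothetical coefficients for two wavelengths in the same tissue
for label, mu_a in [("wavelength A (weakly absorbed)", 0.5),
                    ("wavelength B (strongly absorbed)", 20.0)]:
    depth = 1.0 / mu_a  # 1/e penetration depth
    print(f"{label}: 1/e penetration depth = {depth:.2f} cm")
```

The two orders of magnitude between these (illustrative) coefficients show why choosing the wavelength effectively selects which tissue absorbs the energy and how deep it penetrates.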
Modern laparoscopes are equipped with visible-light cameras that help surgeons navigate human anatomy. However, because surgical procedures demand precision, surgeons would benefit from auxiliary imaging technologies to perform operations reliably. To that end, two cameras, near-infrared (NIR) and red-green-blue (RGB), can be integrated into one housing module while maintaining centerpoint alignment and optimal image focus. We have designed a prototype that satisfies these requirements and features cameras that can be individually translated in the x, y, and z directions. Tri-directional translation and tilt-angle fine-tuning allow the cameras to conform to the lens focal distance and ensure they capture the same visual field. To demonstrate the usefulness of this housing design, we characterize its optical alignment, fields of view, and depth of focus, and describe a custom-fabricated snapshot imager for real-time, intraoperative medical applications.

The housing module consists of a casing module for each camera and a central cube that serves as an interface between the light-collection optics at the front of the cube and the two cameras. A dichroic filter positioned at 45 degrees within the cube transmits near-infrared wavelengths to the NIR camera at the back and reflects visible light to the RGB camera on the bottom. To improve image focus, the casing modules can be moved in and out of the cube and fine-tuned by varying the relative mounting-screw tensions. Slots and spacers allow calibration between the cameras and ensure they share the same centerpoint.
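As an illustrative sketch of what centerpoint calibration enables downstream (assumed display software, not the prototype's actual pipeline), the snippet below co-registers an NIR frame onto the RGB frame using a calibrated integer-pixel offset (dx, dy) and blends it into one display channel; dx, dy, and the blend weight are assumed calibration values.

```python
import numpy as np

def overlay_nir(rgb, nir, dx=0, dy=0, alpha=0.4):
    """Shift a single-channel NIR frame by a calibrated (dx, dy) offset
    and alpha-blend it into the green channel of the RGB frame."""
    h, w, _ = rgb.shape
    shifted = np.zeros((h, w), dtype=np.float32)
    # copy only the region that remains in-frame after the shift
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    yd = slice(max(-dy, 0), h + min(-dy, 0))
    xd = slice(max(-dx, 0), w + min(-dx, 0))
    shifted[ys, xs] = nir[yd, xd]
    out = rgb.astype(np.float32)
    out[..., 1] = (1 - alpha) * out[..., 1] + alpha * shifted
    return out.astype(rgb.dtype)

# placeholder frames standing in for live camera output
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
nir = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
fused = overlay_nir(rgb, nir, dx=3, dy=-2)  # hypothetical calibrated offset
```

Because the housing aligns the two sensors mechanically, the residual offset such software must correct should be small and fixed, which is what makes a simple per-device (dx, dy) calibration plausible here.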