Background: For many abdominal surgical interventions, laparotomy has gradually been replaced by laparoscopy, with numerous benefits for the patient in terms of post-operative recovery. However, during laparoscopy, the endoscope only provides a single viewpoint to the surgeon, leaving numerous blind spots and opening the way to peri-operative adverse events. Alternative camera systems have been proposed, but many lack the requisite resolution or robustness for use during surgery, or cannot provide real-time images. Here, we present the added value of the Enhanced Laparoscopic Vision System (ELViS), which overcomes these limitations and provides a broad view of the surgical field in addition to the usual high-resolution endoscope.
Methods: Experienced laparoscopy surgeons performed several typical procedure steps on a live pig model. The time-to-completion for surgical exercises performed by conventional endoscopy and ELViS-assisted surgery was measured. A debriefing interview following each operating session was conducted by an ergonomist, and a System Usability Scale (SUS) score was determined.
Results: Proof of concept of ELViS was achieved in an animal model with 7 expert surgeons without peri-operative adverse events related to the surgical device. No differences were found in time-to-completion. Mean SUS score was 74.7, classifying the usability of the ELViS as "good". During the debriefing interview, surgeons highlighted several situations where the ELViS provided a real advantage (such as during instrument insertion, exploration of the abdominal cavity or for orientation during close work), and also suggested avenues for improvement of the system.
Conclusions: This first test of the ELViS prototype on a live animal model demonstrated its usability and provided promising and useful feedback for further development.
Perceiving and making sense of the surgical scene during Total Knee Arthroplasty (TKA) surgery is crucial for building assistance and decision support systems for surgeons and their team. However, the need for large volumes of annotated and structured data for AI-based methods hinders the development of such tools. We hereby present a study on the use of transfer learning to train deep neural networks with scarce annotated data to automatically detect bony areas on live images. We provide quantitative evaluation results on in-vivo data, captured during several TKA procedures. We hope that this work will facilitate further developments of smart surgical assistance tools for orthopaedic surgery.
We present a new strategy for RANSAC sampling named BetaSAC, in reference to the beta distribution. Our proposed sampler builds a hypothesis set incrementally, selecting data points conditional on the previous data selected for the set. Such sampling is shown to provide more suitable samples, not only in terms of inlier ratio but also of consistency and potential to lead to accurate parameter estimation. The algorithm is presented as a general framework, easily implemented and able to exploit any kind of prior information on the potential of a sample. As with PROSAC, BetaSAC converges towards RANSAC in the worst case. The benefits of the method are demonstrated on the homography estimation problem.
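The incremental, rank-biased sampling idea can be illustrated with a short sketch. This is not the paper's exact algorithm (real BetaSAC scores each candidate conditionally on the partial sample already drawn, whereas this sketch uses a fixed per-point prior score); the function name and parameters below are ours for illustration. The key ingredients shown are the beta-distributed rank draw for the i-th element of an m-point sample, and the fallback to uniform sampling after the iteration budget, which gives the RANSAC worst-case behaviour:

```python
import random

def betasac_sample(points, scores, m, t, T):
    """Draw an m-point hypothesis set, BetaSAC-style (illustrative sketch).

    points : candidate data points
    scores : prior quality score per point (higher = more promising)
    m      : sample size required by the model estimator
    t      : current iteration index
    T      : budget after which sampling falls back to uniform,
             so the sampler converges towards plain RANSAC
    """
    n = len(points)
    # Rank candidates by their prior score, best first.
    order = sorted(range(n), key=lambda i: -scores[i])
    sample, chosen = [], set()
    for i in range(1, m + 1):
        if t >= T:
            # Worst case: uniform sampling, i.e. standard RANSAC.
            k = random.randrange(n)
        else:
            # Draw a rank from Beta(i, m + 1 - i): early picks favour
            # high-scoring points, later picks spread over the ranking.
            u = random.betavariate(i, m + 1 - i)
            k = min(int(u * n), n - 1)
        # Linear probing to avoid duplicates in the sample.
        while order[k] in chosen:
            k = (k + 1) % n
        chosen.add(order[k])
        sample.append(points[order[k]])
    return sample
```

In a full pipeline, each returned sample would be fed to a minimal solver (e.g. a 4-point homography estimator) and scored by its inlier count, exactly as in standard RANSAC.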
In this paper we describe a new solution for constructing a model of a scene and its objects using various explorations of a single camera in an unknown environment. Object motion presents a difficult challenge to scene modeling. The proposed method combines metric localization and place recognition to detect and model objects without a priori knowledge and to incrementally extend a scene model by adding new places and objects. We demonstrate the quality of our approach with results from image sequences taken from two different scenes.
Progress in machine learning and artificial intelligence (AI) opens the way to the development of smart clinical-assistance systems and decision-support tools for the operating room (OR). Yet, before deploying these algorithms in the OR, assessment of their performances in real clinical conditions is necessary. Gathering intraoperative data for training and testing is hard, and robustness to the challenging conditions of the OR is not always demonstrated. In this paper we introduce a unique multi-patient dataset of images captured during Total Knee Arthroplasty (TKA) surgery. We use this dataset to compare five deep learning-based image segmentation approaches and provide quantitative and qualitative results. We hope that this work will help shed light on the performances of AI in a real surgical environment.
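Quantitative comparisons of segmentation approaches typically rest on overlap metrics such as Intersection-over-Union and the Dice coefficient. The abstract does not specify which metrics were used, so the sketch below simply illustrates the two standard ones on flat binary masks; the function names and signatures are ours:

```python
def iou(pred, gt):
    """Intersection-over-Union between two flat binary masks."""
    inter = sum(p and g for p, g in zip(pred, gt))
    union = sum(p or g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient: 2*|A∩B| / (|A| + |B|)."""
    inter = sum(p and g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2 * inter / total if total else 1.0

# Example: pred overlaps gt on 1 of 2 predicted pixels.
pred = [1, 1, 0, 0]
gt   = [1, 0, 0, 0]
print(iou(pred, gt))   # 0.5
print(dice(pred, gt))  # 0.666...
```

Averaging these scores per class and per patient is a common way to turn raw masks into the kind of quantitative comparison described above.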