Purpose: Minimally invasive surgery (MIS) has become the standard for many surgical procedures, as it minimizes trauma, reduces infection rates, and shortens hospitalization. However, manipulating objects in the surgical workspace can be difficult due to the unintuitive handling of instruments and the limited range of motion. Despite the advantages of robot-assisted systems, such as an augmented view and improved dexterity, both robotic and conventional MIS techniques introduce drawbacks such as limited haptic perception and a heavy reliance on visual perception.

Methods: To address these limitations, a perception study was conducted to investigate whether transmitting intra-abdominal acoustic signals can improve perception during MIS. To investigate whether these acoustic signals can serve as a basis for further automated analysis, a large audio data set capturing the application of electrosurgery on different types of porcine tissue was acquired. A sliding-window technique was applied to compute log-mel spectrograms, which were fed to a pre-trained convolutional neural network for feature extraction. A fully connected layer was trained on the intermediate feature representation to classify instrument–tissue interaction.

Results: The perception study revealed that acoustic feedback has the potential to improve perception during MIS and to serve as a basis for further automated analysis. The proposed classification pipeline yielded excellent performance for four types of instrument–tissue interaction (muscle, fascia, liver, and fatty tissue), achieving top-1 accuracies of up to 89.9%. Moreover, our model is able to distinguish electrosurgical operation modes with an overall classification accuracy of 86.4%.

Conclusion: Our proof of principle indicates great application potential for guidance systems in MIS, such as controlled tissue resection. Supported by a pilot perception study with surgeons, we believe that utilizing audio signals as an additional information channel has great potential to improve surgical performance and to partly compensate for the loss of haptic feedback.
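The preprocessing step described above can be sketched in a few lines of NumPy. This is a minimal, self-contained illustration of a sliding-window log-mel spectrogram; the sample rate, FFT size, hop length, and mel-band count are assumptions for illustration, not the parameters used in the study, and the pre-trained CNN feature extractor and classifier are omitted.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Sliding window -> windowed FFT -> power spectrum -> mel projection -> log.
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)
    power = np.array(frames).T                    # (n_fft//2 + 1, n_frames)
    fb = mel_filterbank(n_mels, n_fft, sr)
    return np.log(fb @ power + 1e-10)             # (n_mels, n_frames)

# One second of synthetic audio standing in for an intra-abdominal recording.
sig = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000.0)
S = log_mel_spectrogram(sig)
print(S.shape)  # (40, 61)
```

In a pipeline like the one described, each such spectrogram window would be passed as an image-like input to a pre-trained CNN, with only the final fully connected layer trained on the extracted features.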
Background: Digitalization affects almost every aspect of modern daily life, including a growing number of health care services and telemedicine applications. Fifth-generation (5G) mobile communication technology has the potential to meet the requirements of this digitalized future with high bandwidth (up to 10 Gbit/s), low latency (<1 ms), and high quality of service, enabling wireless real-time data transmission in telemedical emergency health care applications.

Objective: The aim of this study is the development and clinical evaluation of a 5G usability test framework enabling preclinical diagnostics with mobile ultrasound over a 5G network.

Methods: Bidirectional audio-video data transmission between the ambulance car and the hospital was established, combining both the 5G radio and core network parts. In addition to technical performance evaluations, a medical assessment of the transferred ultrasound image quality and transmission latency was carried out.

Results: The telemedical and clinical application properties of the ultrasound probe were rated 1 (very good) to 2 (good) on a 6-point Likert scale by 20 survey participants. The 5G field test revealed an average end-to-end round-trip latency of 10 milliseconds. The measured average throughput was 4 Mbps for the ultrasound image traffic and 12 Mbps for the video stream. Traffic saturation resulted in lower video quality and a slower video stream: without core slicing, the throughput for the video application dropped to 8 Mbps. Deploying core network slicing restored quality and latency.

Conclusions: Bidirectional data transmission between the ambulance car and the remote hospital site was successfully established over the 5G network, allowing data and measurements from both applications (ultrasound unit and video streaming) to be sent and received. Core slicing was implemented for a better user experience. Clinical evaluation of the telemedical transmission and the applicability of the ultrasound probe was consistently positive.
Different components of the newly defined field of surgical data science have been under investigation in our groups for more than a decade. In this paper, we describe our sensor-driven approaches to workflow recognition without the need for explicit models, and our current aim to apply this knowledge to enable context-aware surgical assistance systems, such as a unified surgical display and robotic assistance systems. The methods we have evaluated over time include dynamic time warping, hidden Markov models, random forests, and, most recently, deep neural networks, specifically convolutional neural networks.
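Of the methods listed above, dynamic time warping (DTW) is the simplest to illustrate: it aligns two sequences that unfold at different speeds, which is why it suits workflow recognition, where the same surgical phase may take different durations across procedures. The following is a minimal textbook sketch of DTW on 1-D sequences, not the authors' implementation; real workflow signals would be multivariate sensor streams.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic dynamic-programming DTW between two 1-D sequences.
    # D[i, j] = cost of the best alignment of a[:i] with b[:j].
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # A step may advance either sequence or both (match).
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a signal aligns perfectly under DTW,
# even though the sequences differ element-by-element.
ref = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
warped = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0])
print(dtw_distance(ref, warped))  # 0.0
```

In a model-free workflow-recognition setting, such a distance can drive nearest-neighbor matching of a live sensor stream against recorded reference procedures.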