Robotic-assisted minimally invasive surgeries have gained considerable popularity over conventional procedures as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from limitations that affect their outcome. One of them is the lack of force feedback, which restricts the surgeon's sense of touch and may reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts by extracting the geometry of motion of the heart's surface, minimizing an energy functional to recover its 3D deformable structure. A deep network, based on an LSTM-RNN architecture, is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our force estimation solution avoids the drawbacks usually associated with force-sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues and report an average root-mean-square error of 0.02 N.
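As an illustration of the learning stage described above, the following is a minimal sketch of an LSTM-based regressor that maps a sequence of per-frame visual-geometric features to an applied-force estimate. The PyTorch implementation, feature dimension, hidden size, layer count, and 3-component force output are assumptions chosen for illustration, not details reported in the paper.

```python
import torch
import torch.nn as nn

class ForceEstimatorLSTM(nn.Module):
    """Maps a sequence of visual-geometric surface features to a force estimate.

    Hypothetical sketch: all dimensions below are illustrative assumptions.
    """
    def __init__(self, feature_dim=64, hidden_dim=128, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, 3)  # force vector (Fx, Fy, Fz)

    def forward(self, x):
        # x: (batch, time, feature_dim) sequence of per-frame surface descriptors
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # force estimate at the last time step

model = ForceEstimatorLSTM()
features = torch.randn(8, 30, 64)   # 8 sequences of 30 frames (dummy data)
force_hat = model(features)         # (8, 3) estimated force vectors in newtons
```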
The lack of force feedback is considered one of the major limitations in Robot-Assisted Minimally Invasive Surgeries. Since add-on sensors are not a practical solution for clinical environments, in this paper we present a force estimation approach that starts with the reconstruction of the 3D deformation structure of the tissue surface by minimizing an energy functional. A Recurrent Neural Network–Long Short-Term Memory (RNN-LSTM) based architecture is then presented to accurately estimate the applied forces. According to the results, our solution offers long-term stability and shows a significant accuracy improvement, ranging from about 54% to 78%, over existing approaches.
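The energy-functional minimization used for surface reconstruction can be illustrated with a toy variational problem: a data-fidelity term keeps the recovered profile close to noisy displacement observations while a smoothness regularizer penalizes abrupt changes. The specific functional, weights, and 1D setting below are assumptions for illustration only; the paper's actual formulation is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "observed" surface displacements with noise (illustrative only).
observed = np.sin(np.linspace(0, np.pi, 50)) + 0.05 * np.random.randn(50)
lam = 10.0  # smoothness weight (assumed value)

def energy(z):
    # Data term: stay close to the observed displacements.
    data_term = np.sum((z - observed) ** 2)
    # Smoothness term: penalize large first differences along the surface.
    smooth_term = lam * np.sum(np.diff(z) ** 2)
    return data_term + smooth_term

result = minimize(energy, x0=np.zeros_like(observed), method="L-BFGS-B")
recovered = result.x  # smoothed estimate of the deforming surface profile
```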
This paper addresses the lack of force feedback in robotic-assisted minimally invasive surgeries. Force is an important measure for surgeons in order to prevent intra-operative complications and tissue damage. Thus, an innovative neuro-vision-based force estimation approach is proposed. Tissue surface displacement is first measured via minimization of an energy functional. A neural approach is then used to establish a geometric-visual relation and estimate the applied force. The proposed approach eliminates the need for add-on sensors and biocompatibility studies, and is applicable to tissues of any shape. Moreover, we achieved an improvement of 15.14% to 56.16% over other approaches, which demonstrates the potential of our proposal.
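The improvement figures quoted above are relative error reductions with respect to competing estimators. The short helper below shows how such a percentage would typically be computed from root-mean-square errors; the numbers in the usage line are hypothetical and only demonstrate the metric.

```python
import numpy as np

def rmse(estimated, ground_truth):
    """Root-mean-square error between estimated and measured forces (N)."""
    est, gt = np.asarray(estimated), np.asarray(ground_truth)
    return float(np.sqrt(np.mean((est - gt) ** 2)))

def improvement(baseline_rmse, proposed_rmse):
    """Relative RMSE reduction of a proposed method over a baseline, in percent."""
    return 100.0 * (baseline_rmse - proposed_rmse) / baseline_rmse

# Hypothetical values purely to illustrate the metric, not reported results.
print(improvement(baseline_rmse=0.05, proposed_rmse=0.02))  # -> 60.0
```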
The Robotic-Assisted Surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose various constraints. Thus, we use a neuro-visual approach to estimate the applied forces, in which the 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of the data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in Robotic-Assisted Surgeries. According to the results, we demonstrate the positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and prevention of overfitting. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing force estimation approaches.
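A common way to realize the dimensionality-reduction step described above is principal component analysis applied to the per-frame descriptors before they are fed to the recurrent network. The sketch below assumes scikit-learn's PCA, a 512-dimensional descriptor, and a 95% explained-variance target; these choices are illustrative assumptions, not the paper's reported configuration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical per-frame visual-geometric descriptors: 1000 frames x 512 features.
frames = np.random.randn(1000, 512)

# Keep enough principal components to explain 95% of the variance; the retained
# low-dimensional features would then be fed to the LSTM-RNN force estimator.
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(frames)
print(reduced.shape)  # (1000, k) with k much smaller than 512
```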
In computer-assisted beating-heart surgeries, accurate tracking of the heart's motion is of great importance, and there is a continuous need to eliminate any source of error that might disturb the tracking process. One source of error is the specular reflection that appears on the glossy surface of the heart. In this paper, we propose a robust solution for the detection and removal of specular highlights. A hybrid color-attribute and wavelet-based edge projection approach is applied to accurately identify the affected regions. These regions are then recovered using dynamic search-based inpainting with adaptive windowing. Experimental results demonstrate the precision and efficiency of the proposed method. Moreover, it achieves real-time performance and can be generalized to various other applications.
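For intuition, a simplified stand-in for the highlight detection and recovery pipeline is sketched below: bright, low-saturation pixels are flagged as specular regions and then restored with OpenCV's off-the-shelf inpainting. This is only an approximation under assumed thresholds and a hypothetical input file; the paper's method instead combines color attributes with wavelet-based edge projection and dynamic search-based inpainting with adaptive windowing.

```python
import cv2
import numpy as np

# Hypothetical endoscopic frame of the heart surface.
frame = cv2.imread("heart_frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
saturation, value = hsv[:, :, 1], hsv[:, :, 2]

# Flag bright, low-saturation pixels as specular highlights (assumed thresholds).
mask = ((value > 230) & (saturation < 40)).astype(np.uint8) * 255
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))  # cover highlight borders

# Restore the masked regions with Telea inpainting (radius of 5 pixels).
restored = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("heart_frame_restored.png", restored)
```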