2019
DOI: 10.1007/978-3-030-32692-0_65

Deep Residual Learning for Instrument Segmentation in Robotic Surgery

Abstract: Detection, tracking, and pose estimation of surgical instruments are crucial tasks for computer assistance during minimally invasive robotic surgery. In the majority of cases, the first step is the automatic segmentation of surgical tools. Prior work has focused on binary segmentation, where the objective is to label every pixel in an image as tool or background. We improve upon previous work in two major ways. First, we leverage recent techniques such as deep residual learning and dilated convolutions to adva…
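The abstract highlights combining deep residual learning with dilated convolutions for dense tool segmentation. The paper's exact network is not reproduced in this excerpt; the PyTorch block below is only a minimal sketch of the general building block that combination refers to, and the class name `DilatedResidualBlock`, the channel count, and the dilation rate are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Residual block whose 3x3 convolutions use dilation to enlarge the
    receptive field without reducing spatial resolution."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # residual (skip) connection

if __name__ == "__main__":
    block = DilatedResidualBlock(channels=64, dilation=2)
    feat = torch.randn(1, 64, 128, 160)  # dummy feature map
    print(block(feat).shape)             # spatial size is preserved
```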

Cited by 89 publications (80 citation statements)
References 18 publications
“…For the binary segmentation task the best result is achieved by TernausNet-16 providing IoU = 0.836 and Dice = 0.901. These values are the best reported in the literature up to now [7, 15]. Next, we consider multi-class segmentation of different parts of instruments.…”
Section: Results (mentioning)
confidence: 88%
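The quoted passage reports IoU and Dice scores for binary tool segmentation. As a reference for what those metrics measure, here is a minimal NumPy sketch of their standard definitions on binary masks; the function name and the epsilon smoothing term are illustrative choices, not taken from the cited paper.

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute IoU (Jaccard) and Dice for binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = intersection / (union + eps)
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    return iou, dice

# Toy example: 2 of 4 "tool" pixels overlap
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(iou_and_dice(pred, gt))  # IoU = 2/4 = 0.5, Dice = 2*2/(3+3) ≈ 0.667
```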
“…In the domain of medical imaging, convolutional neural networks (CNN) have been successfully used, for example, for breast cancer histology image analysis [17], bone disease prediction [21] and age assessment [10], and other problems [5]. Previous deep learning-based applications to robotic instrument segmentation have demonstrated competitive performance in binary segmentation [1, 7] and promising results in multi-class segmentation [15].…”
Section: Introduction (mentioning)
confidence: 99%
“…However, both surgical robotics and robotic imaging will play increasingly crucial roles in the years to come. Machine learning is demonstrating convincing results in real-time tool tracking [118], [172]–[174]. This for example enables automatic positioning of intra-operative OCT imaging planes within surgical microscopy for ophthalmic surgery [119], [175].…”
Section: Discussion (mentioning)
confidence: 96%
“…As specified in the guidelines, we performed a cross-validation by leaving one surgery out of the training data. The segmentation result was compared to generic methods as well as algorithms that were explicitly published for this task [14, 7, 18]. Garcia P. Herrera et al [7] proposed a fully convolutional network for segmentation in minimally invasive surgery.…”
Section: Experimental Evaluation (mentioning)
confidence: 99%
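The evaluation quoted above uses a leave-one-surgery-out cross-validation, i.e. every frame from one surgery forms the held-out fold. A minimal sketch of that grouping with scikit-learn's LeaveOneGroupOut is shown below; the toy arrays are placeholders, as the actual split is defined by the challenge guidelines rather than anything in this excerpt.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Toy frame-level data: each frame is tagged with the surgery it came from.
X = np.arange(12).reshape(6, 2)        # 6 frames, 2 dummy features each
y = np.array([0, 1, 0, 1, 1, 0])       # per-frame labels
groups = np.array([1, 1, 2, 2, 3, 3])  # surgery id for each frame

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups)):
    held_out = groups[test_idx][0]
    print(f"fold {fold}: train on surgeries {sorted(set(groups[train_idx]))}, "
          f"test on surgery {held_out}")
```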
“…To achieve real-time performance, they applied the network only on every couple of frames and propagated the information with optical flow (FCN+OF). DLR [18] represents a deep residual network with dilated convolutions. Laina and Rieke et al [14] suggested a unified deep learning approach for simultaneous segmentation and 2D pose estimation using Fully Convolutional Residual Network with skip connections.…”
Section: Experimental Evaluation (mentioning)
confidence: 99%
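The FCN+OF strategy described in the quote (segment only keyframes, then propagate the mask to intermediate frames with optical flow) can be illustrated with a small OpenCV sketch. Dense Farnebäck flow is just one possible estimator and may differ from what the cited work used; the `propagate_mask` helper and its parameters are assumptions made for illustration.

```python
import cv2
import numpy as np

def propagate_mask(prev_gray: np.ndarray, next_gray: np.ndarray,
                   prev_mask: np.ndarray) -> np.ndarray:
    """Warp a binary segmentation mask from a keyframe to the next frame.

    prev_gray / next_gray: uint8 grayscale frames.
    Uses backward warping: estimate flow from the next frame to the keyframe,
    then sample the keyframe mask at the displaced coordinates.
    """
    # Positional args: prev, next, flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(prev_mask.astype(np.float32), map_x, map_y,
                       interpolation=cv2.INTER_LINEAR)
    return (warped > 0.5).astype(np.uint8)
```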