2016
DOI: 10.1007/978-3-319-46720-7_49
Real-Time Online Adaption for Robust Instrument Tracking and Pose Estimation



Cited by 16 publications (17 citation statements). References 15 publications.
“…Image-based surgical instrument detection and tracking is attractive because it relies purely on equipment already in the operating theatre [4]. Likewise, pose estimation from images has been shown to be feasible in different specialisations, such as retinal microsurgery [5]–[7], neurosurgery [8] and MIS [9]–[11]. While both detection and tracking are difficult, pose estimation presents additional challenges due to the complex articulation structure.…”
Section: Introduction
confidence: 99%
“…While both detection and tracking are difficult, pose estimation presents additional challenges due to the complex articulation structure. Most image-based methods [7], [11] extract low-level visual features from keypoints or regions to learn offline or online part-appearance templates using machine learning algorithms. Such low-level feature representations usually lack semantic interpretation, which means they cannot capture high-level category appearance.…”
Section: Introduction
confidence: 99%
“…The top-down feature fusion path is constructed by laterally connecting multiple layers of the backbone. The top layer P5 of the top-down path is generated from the corresponding backbone output C5 with a 1×1 convolution. A 2×2 deconvolutional layer with a stride of 2 is applied to P5, which outputs a feature map with the same spatial size as C4.…”
Section: Tools Detection and Bounding Box Regression
confidence: 99%
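The fusion step quoted above (a 1×1 convolution on C5 to produce P5, then a 2×2 stride-2 deconvolution so the result matches C4's spatial size) can be sketched in NumPy. Channel counts and feature-map sizes below are illustrative assumptions, not the cited paper's exact configuration, and the deconvolution is depthwise for brevity:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W); w: (C_out, C_in).
    # A 1x1 convolution is a per-pixel linear map over channels.
    return np.einsum('oc,chw->ohw', w, x)

def deconv2x2_stride2(x, w):
    # x: (C, H, W); w: (C, 2, 2).
    # Each input pixel expands into a 2x2 block, doubling H and W
    # (depthwise transposed convolution, stride 2, no overlap).
    c, h, wd = x.shape
    out = np.zeros((c, 2 * h, 2 * wd))
    for i in range(2):
        for j in range(2):
            out[:, i::2, j::2] = x * w[:, i, j][:, None, None]
    return out

rng = np.random.default_rng(0)
C5 = rng.standard_normal((2048, 7, 7))            # assumed backbone top output
P5 = conv1x1(C5, rng.standard_normal((256, 2048)))  # lateral 1x1 conv
up = deconv2x2_stride2(P5, rng.standard_normal((256, 2, 2)))
print(P5.shape, up.shape)  # (256, 7, 7) (256, 14, 14)
```

The upsampled map now has the same 14×14 spatial size as an assumed C4 layer, so the two can be fused elementwise in the top-down path.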
“…Earlier surgical tool detection methods mostly extract low-level visual features [5], [6], such as color, gradient, and texture. However, these methods are inefficient and error-prone, so small or large tools may be overlooked.…”
Section: Introduction
confidence: 99%
“…4. RM dataset: comparison to FPBC [23], POSE [3] and Online Adaption [9], measured by the KBB metric. Charts (a) to (c) show the accuracy for the left tip, right tip and center joint, respectively, for the Half Split experiment.…”
Section: Evaluation of Modeling Strategies
confidence: 99%