2017
DOI: 10.1007/s11548-017-1600-y
Temporal clustering of surgical activities in robot-assisted surgery

Abstract: Purpose: Most evaluations of surgical workflow or surgeon skill use simple, descriptive statistics (e.g., time) across whole procedures, thereby deemphasizing critical steps and potentially obscuring critical inefficiencies or skill deficiencies. In this work, we examine off-line, temporal clustering methods that chunk training procedures into clinically relevant surgical tasks or steps during robot-assisted surgery. Methods: We collected system kinematics and events data from nine surgeons performing five differen…
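The clustering step described in the abstract can be sketched in miniature: slide a window over the kinematics stream, summarize each window with simple statistics, and cluster the window features so that contiguous runs of one label form candidate task segments. This is a deliberately simplified stand-in, not the paper's actual pipeline; the window length, step size, mean/std features, and the plain k-means routine are all illustrative assumptions.

```python
import numpy as np

def window_features(kin, win=30, step=15):
    """Slide a window over a kinematics stream (T x D) and summarize
    each window by per-dimension mean and standard deviation."""
    feats, starts = [], []
    for s in range(0, len(kin) - win + 1, step):
        w = kin[s:s + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        starts.append(s)
    return np.array(feats), starts

def kmeans(X, k, iters=50):
    """Plain k-means over window features; returns one label per window.
    Initializing from the first and last windows is a simplifying choice."""
    centers = X[[0, -1]].copy() if k == 2 else X[:k].copy()
    for _ in range(iters):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Toy "procedure": two synthetic regimes of 3-DoF kinematics.
rng = np.random.default_rng(1)
kin = np.vstack([rng.normal(0, 0.1, (300, 3)),
                 rng.normal(2, 0.1, (300, 3))])
feats, starts = window_features(kin)
labels = kmeans(feats, k=2)
```

Runs of identical labels in `labels`, mapped back through `starts`, give the start/end times of each candidate segment.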

Cited by 32 publications (21 citation statements). References 18 publications.
“…The most relevant prior work is [9], which encodes short windows of kinematics using denoising autoencoders, and which uses these representations to search a database using motion-based queries. Other unsupervised approaches include activity alignment under the assumption of identical structure [10] and activity segmentation using hand-crafted pipelines [6], structured probabilistic models [14], and clustering [20].…”
Section: Introduction
Mentioning (confidence: 99%)
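The motion-based querying described in the statement above can be illustrated with a toy nearest-neighbor search over kinematics windows. The cited work embeds windows with a denoising autoencoder; as a simplified stand-in, this sketch queries raw flattened windows by cosine similarity. The window length, step, and data are illustrative assumptions.

```python
import numpy as np

def flatten_windows(kin, win=20, step=10):
    """Flatten sliding windows of a kinematics stream (T x D)
    into vectors that can be searched directly."""
    return np.array([kin[s:s + win].ravel()
                     for s in range(0, len(kin) - win + 1, step)])

def nearest_window(db, query):
    """Return the index of the database window most similar
    to the query under cosine similarity."""
    db_n = db / np.linalg.norm(db, axis=1, keepdims=True)
    q_n = query / np.linalg.norm(query)
    return int((db_n @ q_n).argmax())

rng = np.random.default_rng(0)
kin = rng.normal(size=(200, 6))                    # toy 6-DoF kinematics stream
db = flatten_windows(kin)
query = db[7] + rng.normal(0, 0.01, db.shape[1])   # slightly perturbed copy of window 7
idx = nearest_window(db, query)
```

A learned embedding would replace `flatten_windows`, making the search robust to speed and amplitude variation that raw windows cannot absorb.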
“…The recognition and segmentation of the robot's current action is one of the main pillars of the surgical state estimation process. Many models have been developed for the segmentation and recognition of fine-grained surgical actions that last for a few seconds, such as cutting [5][6][7][8], as well as surgical phases that last for up to 10 minutes, such as bladder dissection [9][10][11]. The recognition of fine-grained surgical states is particularly challenging due to their short duration and frequent state transitions.…”
Section: Introduction
Mentioning (confidence: 99%)
“…Lea et al. measured two scene-based features in JIGSAWS as additional variables to the robot kinematics data in their Latent Convolutional Skip-Chain CRF (LC-SC-CRF) model [5]. Zia et al. collected robot kinematics and system events data from RAS to perform surgical phase recognition [10]. While these attempts have been shown to improve model accuracy, to the best of the authors' knowledge, there is yet to be a unified method that incorporates multiple data sources directly for fine-grained surgical state estimation.…”
Section: Introduction
Mentioning (confidence: 99%)
“…The beginning and end times of tasks or sub-tasks must be automatically identified from within the entire procedure because manual identification through post-operative video review is overly time-consuming and not scalable. Machine learning algorithms have been used with promising initial results in laparoscopic [8,9,10,11] and robotic-assisted surgeries [12,13,14,15,16,17,18].…”
Section: Introduction
Mentioning (confidence: 99%)
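Automatically identifying begin and end times, as the statement above describes, amounts to change-point detection on the kinematics stream. A minimal sketch, assuming a simple mean-shift criterion over adjacent windows (the window size and threshold are illustrative assumptions, not parameters from any cited method):

```python
import numpy as np

def change_points(signal, win=25, thresh=1.5):
    """Flag a task boundary wherever the mean of the next window
    departs from the mean of the previous window by more than
    `thresh`, suppressing detections closer than one window apart."""
    cps = []
    for t in range(win, len(signal) - win):
        left = signal[t - win:t].mean(axis=0)
        right = signal[t:t + win].mean(axis=0)
        if np.linalg.norm(right - left) > thresh and (not cps or t - cps[-1] > win):
            cps.append(t)
    return cps

# Toy 2-DoF stream with one regime change at sample 150.
rng = np.random.default_rng(2)
sig = np.vstack([rng.normal(0.0, 0.1, (150, 2)),
                 rng.normal(1.5, 0.1, (150, 2))])
cps = change_points(sig)
```

Consecutive change points then bracket candidate tasks, replacing the manual post-operative review the statement describes as unscalable.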