2019
DOI: 10.1007/s11517-019-02002-0
A CNN-based prototype method of unstructured surgical state perception and navigation for an endovascular surgery robot

Cited by 69 publications (35 citation statements) | References 53 publications
“…If the goal of robotic surgery is full automation, and all surgeons who use the complete surgical systems operating today are able to perform surgery without haptics, relying on visual cues alone, one has to wonder if the development of haptics is truly necessary. A CNN-based framework to autonomously control guide wires in vessels during angiography is currently being developed that will use haptics recorded during robotic angiography performed by a clinician to develop algorithms that can enable an autonomously functioning robot to successfully navigate vessels intra-luminally [109]. As with humans, the current models involve two-dimensional image interpretation, with fluoroscopy being converted into three-dimensional robotic movements [110, 111].…”
Section: Discussion
Confidence: 99%
“…Considering that simple control operations are repeatedly performed to utilize a robot-assisted intervention system, deep RL may be a solution that effectively alleviates the burden of human operators. Recent applications for autonomous control of interventional devices in phantom simulation supported the potential applicability of deep RL (You et al., 2019; Behr et al., 2019; Karstensen et al., 2020; Chi et al., 2020; Zhao et al., 2019). In this study, we propose a deep RL framework for autonomous guidewire navigation in robot-assisted coronary interventions.…”
Confidence: 88%
“…Considering that simple control operations are repeatedly performed to utilize a robot-assisted intervention system, deep RL may be a solution that effectively alleviates the burden of human operators. Recent applications for autonomous control of interventional devices in phantom simulation supported the potential applicability of deep RL [18]–[22].…”
Section: Introduction
Confidence: 98%