2021
DOI: 10.1109/access.2021.3135277

Deep Reinforcement Learning for Guidewire Navigation in Coronary Artery Phantom

Cited by 28 publications (36 citation statements)
References 30 publications
“…The model evaluated via in-silico experiments achieved high tool-position accuracy with an RMS error of 0.003 mm. An advanced strategy published by Kweon et al. [71] shows that imitation or reinforcement learning can be directly designed for autonomous navigation of endovascular tools. In addition, Karstensen et al. [72] adopted deep deterministic policy gradients with hindsight experience replay for learning-based control of guidewire navigation in a robot-assisted peripheral vascular-intervention study.…”
Section: Key Technologies and Application Areas of Vascular Intervent...
confidence: 99%
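The hindsight experience replay (HER) technique mentioned above relabels failed episodes with goals the agent actually reached, so a sparse-reward task still produces useful gradients. The sketch below is illustrative only; the `Transition` class, 1-D tip position, and `reward_fn` tolerance are assumptions, not details from the cited study.

```python
import random

class Transition:
    """One step of a goal-conditioned episode (illustrative container)."""
    def __init__(self, state, action, next_state, goal):
        self.state, self.action, self.next_state, self.goal = state, action, next_state, goal

def reward_fn(achieved, goal, tol=1.0):
    # Sparse reward: success (0) only if the tip is within `tol` of the goal.
    return 0.0 if abs(achieved - goal) <= tol else -1.0

def her_relabel(episode, k=4):
    """Augment an episode with copies of each transition whose goal is
    replaced by a state reached later in the same episode (the 'future'
    strategy), turning failures into successful training signal."""
    buffer = []
    for i, t in enumerate(episode):
        # Original transition under the true (possibly never-reached) goal.
        buffer.append((t.state, t.action, reward_fn(t.next_state, t.goal), t.next_state, t.goal))
        future = episode[i:]
        for _ in range(min(k, len(future))):
            new_goal = random.choice(future).next_state  # pretend this was the goal
            buffer.append((t.state, t.action, reward_fn(t.next_state, new_goal), t.next_state, new_goal))
    return buffer

# Usage: an episode where the agent never reaches the true goal 10.0 ...
episode = [Transition(s, a, s + a, 10.0) for s, a in [(0.0, 1.0), (1.0, 2.0), (3.0, 1.0)]]
relabeled = her_relabel(episode)
# ... yet relabeled transitions include successful (reward 0.0) examples.
```

In practice this relabeled buffer feeds an off-policy learner such as DDPG, which is why HER pairs naturally with it in the cited work.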
“…As shown in Figure 1, 462 studies met the search criteria, and 21 full-text studies were assessed against the eligibility criteria. A total of 14 studies were identified for review (Rafii-Tari et al., 2013; Chi et al., 2018a,b, 2020; Behr et al., 2019; You et al., 2019; Zhao et al., 2019; Kweon et al., 2021; Meng et al., 2021, 2022; Cho et al., 2022; Karstensen et al., 2022; Wang et al., 2022). The characteristics of the fourteen studies are listed in Table 2.…”
Section: Studies
confidence: 99%
“…RL methods. RL was used in nine studies (9/14, 64%) with algorithms including A3C, DDPG, DQN, Dueling DQN, HER, PI², PPO, and Rainbow (Chi et al., 2018a, 2020; Behr et al., 2019; You et al., 2019; Kweon et al., 2021; Meng et al., 2021, 2022; Cho et al., 2022; Karstensen et al., 2022). Demonstrator data in some form (GAIL, behavior cloning, or HD) was used as a precursor in four of the studies (4/14, 29%) during training (LfD), in conjunction with other RL algorithms (Chi et al., 2018a; Behr et al., 2019; Kweon et al., 2021; Cho et al., 2022).…”
Section: YOLO Supervised Learning
confidence: 99%
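The learning-from-demonstration (LfD) precursor described above is, in its simplest form, behavior cloning: fit a policy to demonstrator state–action pairs before RL fine-tuning. A minimal sketch under stated assumptions — the linear policy, synthetic demonstrations, and least-squares fit are illustrative, not any cited paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrator data: guidewire states mapped to expert actions
# (e.g. translate/rotate commands). A hidden linear expert generates them.
states = rng.normal(size=(256, 4))
expert_w = np.array([[0.5], [-1.0], [0.2], [0.8]])
actions = states @ expert_w

# Behavior cloning as supervised regression: here a least-squares fit,
# standing in for the policy-network pretraining used in the studies.
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

# Imitation error on the demonstrations; RL would then refine this policy.
mse = float(np.mean((states @ w - actions) ** 2))
```

The cloned policy gives RL a reasonable starting point, which is why the studies above use demonstrator data as a precursor rather than a replacement for RL.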
“…Cho et al. [43] created an image-based automatic control approach for controlling the guidewire. Kweon et al. [44] applied deep RL for guidewire navigation in a coronary artery phantom. For motion control during robotic catheterisation, Omisore et al. [22] used sample-efficient deep RL with episodic policy transfer and a fully adaptive tuning technique.…”
Section: Introduction
confidence: 99%