2019 IEEE International Ultrasonics Symposium (IUS)
DOI: 10.1109/ultsym.2019.8926041
Automatic Ultrasound Guidance Based on Deep Reinforcement Learning

Cited by 10 publications (10 citation statements) · References 5 publications
“…However, in contrast with previous work [12,6,21] which uses simulations or phantoms, our proposed system is trained and evaluated on data from real-world routine ultrasound scanning. Moreover, instead of relying on the exact execution of the probe guidance as in previous work [12,6,21], our system reacts to the actual operator probe movements that are sensed with an IMU. This suggests that the system will perform well in future tests on volunteer subjects.…”
Section: Discussion (mentioning)
confidence: 99%
“…One study proposes an algorithm that learns to find a view of the adult heart in a grid of pre-acquired ultrasound images [12]. Moreover, learning-based systems have been proposed in which a robotic actuator finds predefined views of simple tissue phantoms [6] or a fetal US phantom [21]. However, a fetus in the mother's womb is a dynamic and highly variable object that cannot be well represented with static simulations or a phantom.…”
Section: Phantoms and Simulated Environments (mentioning)
confidence: 99%
“…Although many recent approaches have focused on developing smart ultrasound equipment that adds interpretative capabilities to existing systems, Milletari et al. [85] applied reinforcement learning to guide inexperienced users in POCUS to obtain clinically relevant images of the anatomy of interest. Jarosik and Lewandowski [86] developed a software agent that easily adapts to new conditions and informs the user how to obtain the optimal settings of the imaging system during scanning.…”
Section: Improving Workflow Efficiency (mentioning)
confidence: 99%
“…However, complete and accurate expert demonstrations can be intractable or expensive to obtain in clinical US scans. Jarosik et al. [19] customized an RL agent to move a virtual probe in a simple, static toy environment, but the real-world probe navigation task is much more complicated and challenging due to the highly variable anatomy among patients. In [20], the researchers used RL to learn cardiac US probe navigation in a simulation environment built with spatially tracked US frames acquired by a sonographer on a grid covering the patient's chest.…”
Section: Related Work (mentioning)
confidence: 99%
“…3) State transition under restrictions: If there are no restrictions, the probe pose can be updated according to the selected action, as in previous work [19][20][21]. Here, we instead consider two requirements for the probe pose in real-world US scans: i) contact between the probe and the patient surface should be maintained to ensure sufficient acoustic coupling, and ii) the tilt angle of the probe should be limited to ensure the comfort and safety of the patient.…”
Section: A. Reinforcement Learning for Probe Navigation (mentioning)
confidence: 99%
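The restricted state transition quoted above (maintain surface contact, bound the tilt angle) can be sketched as a post-processing step applied after each RL action. The pose layout, the sinusoidal surface model, and the 30° tilt limit below are illustrative assumptions, not details taken from the cited paper:

```python
import numpy as np

MAX_TILT_DEG = 30.0  # hypothetical tilt limit for patient comfort/safety


def surface_height(x, y):
    # Hypothetical smooth patient-surface model; a real system would
    # query a tracked surface mesh or a depth camera instead.
    return 0.05 * np.sin(x) * np.cos(y)


def apply_action(pose, action):
    """Update a probe pose [x, y, z, tilt_deg] by an action delta, then
    enforce (i) surface contact and (ii) the tilt-angle restriction."""
    pose = np.asarray(pose, dtype=float) + np.asarray(action, dtype=float)
    # (i) snap the probe back onto the surface for acoustic coupling
    pose[2] = surface_height(pose[0], pose[1])
    # (ii) clamp the tilt angle to the allowed range
    pose[3] = np.clip(pose[3], -MAX_TILT_DEG, MAX_TILT_DEG)
    return pose


# An action that slides the probe and requests an excessive 45° tilt:
new_pose = apply_action([0.0, 0.0, 0.0, 0.0], [0.1, 0.0, 0.0, 45.0])
```

In this sketch the unrestricted pose update is simply additive, and the two constraints are applied as projections afterwards, which keeps the RL agent's action space unchanged while guaranteeing every visited state is physically admissible.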