2017
DOI: 10.1007/978-3-319-66185-8_71
Deep Learning for Sensorless 3D Freehand Ultrasound Imaging


Cited by 31 publications (15 citation statements)
References 10 publications
“…• First, the 3D location of each 2D ultrasound image is estimated, where external sensor tracking is required at either the training [15,16] or inference [4,5] stage, subject to errors caused by subjects' internal motion (e.g. fetal movement).…”
Section: Conventional 3D Reconstruction Approaches
Citation type: mentioning; confidence: 99%
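For context on the conventional approach quoted above: once each 2D frame's 3D pose is known (e.g. from an external tracker), the frames are compounded into a regular voxel grid. The following is a minimal sketch of nearest-neighbour compounding under assumed 4x4 image-to-world transforms; the function name, parameters, and the simple averaging scheme are illustrative assumptions and are not taken from the cited papers.

```python
# Hypothetical sketch: nearest-neighbour compounding of tracked 2D US frames
# into a regular 3D voxel grid, assuming per-frame 4x4 image-to-world poses.
import numpy as np

def compound_frames(frames, poses, spacing, volume_shape, pixel_size):
    """frames: list of (H, W) arrays; poses: list of 4x4 transforms;
    spacing: voxel size in mm; volume_shape: (X, Y, Z); pixel_size: mm/pixel."""
    volume = np.zeros(volume_shape, dtype=np.float32)
    counts = np.zeros(volume_shape, dtype=np.float32)
    for frame, pose in zip(frames, poses):
        h, w = frame.shape
        # pixel grid in image coordinates (mm); z = 0 for a planar B-mode frame
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pts = np.stack([u * pixel_size, v * pixel_size,
                        np.zeros_like(u, dtype=np.float64),
                        np.ones_like(u, dtype=np.float64)], axis=-1)   # (H, W, 4)
        world = pts.reshape(-1, 4) @ pose.T                            # (H*W, 4)
        idx = np.round(world[:, :3] / spacing).astype(int)             # voxel indices
        valid = np.all((idx >= 0) & (idx < np.array(volume_shape)), axis=1)
        idx, vals = idx[valid], frame.reshape(-1)[valid]
        np.add.at(volume, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)
        np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    # average overlapping contributions; empty voxels stay zero
    return volume / np.maximum(counts, 1.0)
```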
“…While trackerless approaches have not yet been explored in OI, such approaches were first explored more than 10 years ago in ultrasound [12], and subsequently improved by using speckle tracking or RF signal correlation [1]. Recently, a learning-based method for sensorless 3D-US imaging was employed in a deep-learning framework for pose estimation from a series of US images [13]. From the same group, a registration-based panoramic compounding of 3D data was presented, employing the well-known LC2 registration similarity metric for image alignment [11].…”
Section: Related Work
Citation type: mentioning; confidence: 99%
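The sensorless, learning-based approach referenced in [13] regresses inter-frame motion directly from the images instead of relying on a tracker. The sketch below illustrates that general idea in PyTorch and is not the published architecture: two consecutive B-mode frames are stacked as a 2-channel input and a small CNN regresses a 6-DoF relative motion; all layer sizes and the output parameterisation are assumptions.

```python
# Illustrative stand-in, assuming a 2-channel input of consecutive B-mode
# frames and a 6-DoF output (3 translations + 3 rotations).
import torch
import torch.nn as nn

class RelativeMotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 6),  # tx, ty, tz, rx, ry, rz between the two frames
        )

    def forward(self, frame_pair):           # frame_pair: (B, 2, H, W)
        return self.regressor(self.features(frame_pair))

# usage: stack consecutive frames along the channel axis
net = RelativeMotionNet()
pair = torch.randn(4, 2, 256, 256)           # batch of 4 frame pairs
motion = net(pair)                            # (4, 6) relative motion parameters
```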
“…Most previous research is based on non-deep-learning methods [10,12]. Prevost et al. [15] pioneered deep-learning-based estimation of relative motion between US images, and later extended their work by introducing extra optical flow input [2] and integrating a sensor source [14]. Guo et al. [3] proposed a deep contextual learning network for reconstruction, which utilizes 3D convolutions over US video clips for feature extraction.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
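The clip-based feature extraction attributed to Guo et al. [3] can be sketched in the same spirit: a short US video clip is treated as a single 3D tensor, 3D convolutions extract spatio-temporal features, and a head predicts one 6-DoF motion per frame transition. The toy network below is an illustrative stand-in, not the DCL-Net architecture from [3]; its depths, kernel sizes, and output parameterisation are assumptions.

```python
# Hedged sketch of 3D-convolutional feature extraction over a US video clip,
# followed by per-transition 6-DoF motion regression.
import torch
import torch.nn as nn

class ClipMotionNet(nn.Module):
    def __init__(self, clip_len=8):
        super().__init__()
        self.clip_len = clip_len
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d((1, 2, 2)),                     # pool only in-plane
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d((2, 2, 2)),                     # pool across frames too
            nn.AdaptiveAvgPool3d(1),
        )
        # one 6-DoF motion per frame transition in the clip
        self.head = nn.Linear(32, 6 * (clip_len - 1))

    def forward(self, clip):                  # clip: (B, 1, T, H, W)
        feats = self.backbone(clip).flatten(1)
        return self.head(feats).view(-1, self.clip_len - 1, 6)

clips = torch.randn(2, 1, 8, 128, 128)        # 2 clips of 8 frames each
motions = ClipMotionNet(clip_len=8)(clips)    # (2, 7, 6) inter-frame motions
```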