2020
DOI: 10.1007/978-3-030-60334-2_10

Localizing 2D Ultrasound Probe from Ultrasound Image Sequences Using Deep Learning for Volume Reconstruction

Cited by 14 publications (5 citation statements)
References 18 publications
“…The encoder's chief purpose is to estimate the position per pixel and then concatenate the acquired feature map with the corresponding part of the decoder. In subsequent studies, FlowNet has been used to register and fuse images [46][47][48]. However, as a DNN deepens, gradients and information tend to vanish from output to input, and vice versa.…”
Section: Applied Methods (mentioning)
confidence: 99%
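The encoder-decoder concatenation the excerpt describes (a FlowNet/U-Net-style skip connection, where an encoder feature map is joined channel-wise with the matching decoder stage) can be sketched minimally in numpy. The function name and toy shapes here are illustrative assumptions, not code from the cited papers:

```python
import numpy as np

def skip_concat(encoder_feat, decoder_feat):
    """Concatenate an encoder feature map with the matching decoder
    feature map along the channel axis (channels-first layout)."""
    # Spatial dimensions must agree for a channel-wise concatenation.
    assert encoder_feat.shape[1:] == decoder_feat.shape[1:], "spatial dims must match"
    return np.concatenate([encoder_feat, decoder_feat], axis=0)

# Toy feature maps: (channels, height, width)
enc = np.ones((16, 8, 8))
dec = np.zeros((32, 8, 8))
fused = skip_concat(enc, dec)
print(fused.shape)  # (48, 8, 8)
```

Passing encoder features directly to the decoder in this way is one common remedy for the vanishing-gradient/information problem the excerpt raises, since it gives gradients a short path from output to input.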
“…One study [14] proposed a dual-branch balanced feature extraction network to make the model equally sensitive to both frame and volume information. For volume reconstruction problems [27], [28], very local spatial context is often added by supplying the previous slice as an extra input, which aids relative localization between successive slices.…”
Section: B. 2D/3D Deep Registration Strategies For Real-time Navigatio... (mentioning)
confidence: 99%
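The previous-slice conditioning described above amounts to stacking each frame with its predecessor before feeding the pair to the network. A minimal numpy sketch, assuming a channels-first pair per time step (`stack_with_previous` and the toy shapes are hypothetical, not the cited papers' code):

```python
import numpy as np

def stack_with_previous(frames):
    """For each frame t > 0, build a 2-channel input (frame_{t-1}, frame_t),
    giving the network local spatial context for relative localization."""
    return np.stack([np.stack([frames[t - 1], frames[t]])
                     for t in range(1, len(frames))])

frames = np.random.rand(5, 64, 64)   # toy ultrasound sequence: (T, H, W)
inputs = stack_with_previous(frames)
print(inputs.shape)  # (4, 2, 64, 64): one 2-channel sample per successive pair
```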
“…Net (with T_prior = T_{t−1}): 11.72 ± 6.28, 5.06 ± 5.70, 8.87 ± 5.82, 5.01 ± 7.82, 5.80 ± 7.00, 7.30 ± 7.25, 7.28 ± 8.50. Without network (only T_prior): 7.70 ± 6.24, 9.65 ± 6.11, 6.35 ± 6.88, 5.91 ± 6.77, 9.39 ± 7.65, 7.63 ± 7.67. Scenario Im+PT, SpT-Net (with T_prior = T_{t−1}): 11.72 ± 6.28, 2.68 ± 1.49, 2.81 ± 2.00, 5.28 ± 6.30, 3.97 ± 2.03, 3.15 ± 2.17, 5.91 ± 7.19. Without network (only T_prior · T_PT): 3.11 ± 2.67, 3.15 ± 2.94, 5.23 ± 6.32, 4.28 ± 2.93, 3.53 ± 3.10, 5.84 ± 7.22…”
mentioning
confidence: 99%
“…FlowNet and densely connected networks were used in [11,21]. In [12], ResNet and FlowNetS were integrated for better localization and optical-flow estimation, and a consistency loss derived from stereo vision was added. A forward consistency loss was then proposed in [14].…”
Section: Introduction (mentioning)
confidence: 99%
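A consistency loss of the kind this excerpt mentions can be illustrated with 2D rigid transforms: composing a predicted forward transform with the corresponding backward prediction should give the identity, and any residual is penalized. This is a hedged sketch under that assumption (the names `se2` and `consistency_loss` are invented here, not the cited papers' implementation):

```python
import numpy as np

def se2(theta, tx, ty):
    """2D rigid transform (rotation theta, translation tx, ty)
    as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def consistency_loss(T_fwd, T_bwd):
    """Frobenius-norm penalty on the deviation of the composed
    forward/backward transforms from the identity."""
    return np.linalg.norm(T_fwd @ T_bwd - np.eye(3))

T_ab = se2(0.1, 2.0, -1.0)
# Perfectly consistent predictions: backward is the exact inverse.
loss_perfect = consistency_loss(T_ab, np.linalg.inv(T_ab))
# Inconsistent predictions yield a strictly positive penalty.
loss_bad = consistency_loss(T_ab, np.linalg.inv(se2(0.2, 1.0, 0.0)))
```

In training, such a term is typically added to the pose-regression loss so that forward and backward predictions regularize each other.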