2022
DOI: 10.48550/arxiv.2211.04867
Preprint

Trackerless freehand ultrasound with sequence modelling and auxiliary transformation over past and future frames

et al.

Abstract: Three-dimensional (3D) freehand ultrasound (US) reconstruction without a tracker can be advantageous over its two-dimensional or tracked counterparts in many clinical applications. In this paper, we propose to estimate 3D spatial transformation between US frames from both past and future 2D images, using feed-forward and recurrent neural networks (RNNs). With the temporally available frames, a further multitask learning algorithm is proposed to utilise a large number of auxiliary transformation-predicting task…
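For orientation, here is a minimal, hypothetical PyTorch-style sketch of the kind of sequence model the abstract describes: a per-frame CNN encoder feeding a bidirectional RNN over past and future frames, regressing inter-frame rigid-transform parameters. The layer sizes, window length and 6-parameter output are illustrative assumptions, not the authors' architecture.

```python
# Sketch only (not the paper's implementation): an RNN over a window of past
# and future 2D US frames that predicts 6-DoF rigid-transform parameters for
# the centre frame pair. Encoder, hidden sizes and output are assumptions.
import torch
import torch.nn as nn

class SeqTransformEstimator(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=256):
        super().__init__()
        # Small CNN that embeds each 2D frame into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # Bidirectional LSTM so both past and future frames inform the estimate.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                           bidirectional=True)
        # Regress 6 rigid-transform parameters (3 rotations + 3 translations).
        self.head = nn.Linear(2 * hidden_dim, 6)

    def forward(self, frames):            # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)          # (B, T, 2 * hidden_dim)
        centre = out[:, t // 2]           # feature at the centre time step
        return self.head(centre)          # (B, 6) transform parameters

# Usage: a window of 5 consecutive frames (2 past, 1 centre, 2 future).
model = SeqTransformEstimator()
params = model(torch.randn(2, 5, 1, 96, 96))   # -> shape (2, 6)
```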

Cited by 1 publication (2 citation statements)
References 19 publications
“…The first represents the mean square error (MSE) between the estimated transformations (T_(1,k+2), T_(k+2,2k+3)) at each corner point of the frames and their respective ground truth. The second term represents the accumulation loss, which aims at reducing the error of the volume reconstruction; the effectiveness of the accumulation loss has been proven in the literature [13].…”
Section: Loss Function (mentioning, confidence: 99%)
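The excerpt above describes a two-term loss: an MSE on frame-corner points mapped by the estimated versus ground-truth transformations, plus an accumulation term over composed transforms. The sketch below is a hedged reconstruction of that idea; the homogeneous-transform representation, corner layout and weighting `alpha` are assumptions, not the cited paper's exact formulation.

```python
# Hedged sketch of the two-term loss described in the excerpt above.
import torch

def corner_mse(pred_T, gt_T, corners):
    """MSE between corner points mapped by predicted vs. ground-truth transforms.
    pred_T, gt_T: (4, 4) or (B, 4, 4) homogeneous transforms; corners: (4, N)."""
    return ((pred_T @ corners - gt_T @ corners) ** 2).mean()

def accumulation_loss(pred_Ts, gt_Ts, corners):
    """Compose per-interval transforms along the sequence and compare the
    accumulated corner-point mappings, penalising reconstruction drift."""
    pred_acc, gt_acc = pred_Ts[0], gt_Ts[0]
    loss = corner_mse(pred_acc, gt_acc, corners)
    for p, g in zip(pred_Ts[1:], gt_Ts[1:]):
        pred_acc, gt_acc = pred_acc @ p, gt_acc @ g
        loss = loss + corner_mse(pred_acc, gt_acc, corners)
    return loss / len(pred_Ts)

def total_loss(pred_Ts, gt_Ts, corners, alpha=1.0):
    """Corner-point MSE over all intervals plus weighted accumulation loss."""
    mse = sum(corner_mse(p, g, corners) for p, g in zip(pred_Ts, gt_Ts)) / len(pred_Ts)
    return mse + alpha * accumulation_loss(pred_Ts, gt_Ts, corners)
```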
“…To enable a smooth 3D reconstruction, a case-wise correlation loss based on a 3D CNN and the Pearson correlation coefficient was proposed in [10,12]. Qi et al. [13] leverage past and future frames to estimate the relative transformation between each pair of frames in the sequence, using the consistency loss proposed in [14]. Despite the success of these approaches, they still suffer from significant cumulative drift errors and mainly focus on linear probe motions.…”
Section: Introduction (mentioning, confidence: 99%)
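The case-wise correlation loss mentioned above ([10,12]) combines a 3D CNN with the Pearson correlation coefficient; the snippet below sketches only the Pearson-coefficient term, under the assumption that predicted and ground-truth parameter sequences for a whole scan (case) are compared. It is illustrative, not the published formulation.

```python
# Hedged sketch of a Pearson-correlation-based loss term: 1 minus the Pearson
# coefficient between predicted and ground-truth parameter sequences of a scan.
import torch

def pearson_correlation_loss(pred, target, eps=1e-8):
    """pred, target: 1-D tensors of transform parameters over a whole scan."""
    pred_c = pred - pred.mean()
    target_c = target - target.mean()
    r = (pred_c * target_c).sum() / (pred_c.norm() * target_c.norm() + eps)
    return 1.0 - r   # 0 when perfectly correlated, up to 2 when anti-correlated
```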