2022
DOI: 10.1016/j.compmedimag.2022.102069
Deep learning-based framework for motion-compensated image fusion in catheterization procedures

Cited by 4 publications (5 citation statements) | References 24 publications
“…Therefore, the results presented might be considered an indication of the relevance of motion compensation, with further validation in larger cohorts still needed. The co-registration as well as the identification of the respiratory phase was done manually; using automated approaches such as those proposed earlier [15,34,35] might further increase the resulting accuracy. Further, the non-availability of XR runs in two angulations for the motion impact analysis, and hence the projection of the marker displacement onto the view axis, might affect the absolute value of the differences between the analyzed data.…”
Section: Discussion
“…Improved co-registration techniques have been presented for 3D-3D as well as 2D-3D co-registration [12-14], and motion compensation approaches have been introduced, mainly focusing on the compensation of respiratory motion [15-17]. As the desired accuracy of an IGI system, a minimum of 5 mm has been reported for endovascular and cardiac procedures [18,19].…”
Section: Introduction
“…In terms of catheter or guidewire tracking in fluoroscopy images [5,8-14], Vernikouskaya et al. designed a two-channel convolutional neural network (CNN) to track the tip of a pacing catheter, which can be considered a single-point tracking method [13]. Motion was identified based on template matching of 2D fluoroscopic images.…”
Section: Introduction
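The template-matching step mentioned in the statement above can be sketched with plain NumPy. This is a minimal illustration, not the cited authors' implementation: the `track_tip` helper, the window size, and the frame names are assumptions, and a real pipeline would use an optimized routine (e.g. an FFT-based or library matcher) rather than the explicit loops shown here.

```python
import numpy as np

def normalized_cross_correlation(frame, template):
    # Slide the template over the frame and return the NCC score map.
    # Each score is the zero-mean, unit-normalized dot product between
    # the template and the patch under it (1.0 = perfect match).
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    rows = frame.shape[0] - th + 1
    cols = frame.shape[1] - tw + 1
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = frame[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            scores[r, c] = (p * t).sum() / denom if denom > 0 else 0.0
    return scores

def track_tip(prev_frame, next_frame, tip_xy, half=8):
    # Cut a (2*half+1)^2 template around the known tip position in the
    # previous frame, find its best match in the next frame, and return
    # the (dx, dy) displacement of the tip between the two frames.
    x, y = tip_xy
    template = prev_frame[y - half:y + half + 1, x - half:x + half + 1]
    scores = normalized_cross_correlation(next_frame, template)
    row, col = np.unravel_index(np.argmax(scores), scores.shape)
    return (col + half - x, row + half - y)
```

On a synthetic pair of frames where a bright blob shifts by a few pixels, `track_tip` recovers that shift exactly; on real fluoroscopy the score map is noisier, which is one motivation for the learned detectors discussed here.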
“…Motion was identified based on template matching of 2D fluoroscopic images. The training data were generated by tracing a rapid-pacing catheter tip; however, heatmap plots indicated that the catheter tip was not the region of interest, with the CNN's attention spreading onto the diaphragm [13]. In another study, researchers estimated an external force applied to the tip of a planar catheter through image processing algorithms based on cropping and edge detection, followed by a mathematical catheter representation.…”
Section: Introduction
“…In another work, researchers utilized a CNN to explore the possibility of detecting motion between two fluoroscopic frames in catheterization procedures [69]. They compared their CNN-based catheter tip detection with normalized cross-correlation (CC) and found a mean absolute error (MAE) of 8.7 ± 2.5 pixels, or 3.0 ± 0.9 mm, between the methods, with the CNN outperforming CC.…”
Section: Introduction
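The pixel-to-millimetre relationship implied by the quoted errors can be reproduced directly. Note that the ~0.34 mm/pixel spacing below is inferred from the quoted 8.7 px ≈ 3.0 mm and is an assumption, not a value stated in the cited work's methods.

```python
import numpy as np

def mean_absolute_error(pred, ref):
    # MAE between predicted and reference tip positions (any unit).
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.abs(pred - ref).mean())

# Pixel spacing inferred from the quoted figures (assumption):
# 3.0 mm / 8.7 px ~= 0.345 mm per pixel.
pixel_spacing_mm = 3.0 / 8.7
mae_px = 8.7
mae_mm = mae_px * pixel_spacing_mm  # 3.0 mm by construction
```

The same `mean_absolute_error` helper works on per-frame tip coordinates in pixels; multiplying by the detector's pixel spacing converts the result to millimetres for comparison against accuracy targets such as the 5 mm threshold quoted earlier.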