2022
DOI: 10.1007/978-3-031-16446-0_7

ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration

Cited by 11 publications (3 citation statements)
References 32 publications
“…We also compared the performance of ConvUNet-DIR with current state-of-the-art supervised and unsupervised deep learning-based approaches reported in the literature that used public brain MRI datasets. Our method (Dice = 0.980) outperformed the unsupervised approaches of Han et al [19] (Dice = 0.839), Martin et al [28] (Dice = 0.756), Wu et al [29] (Dice = 0.873), Meng et al [30] (Dice = 0.654), Kuang and Schmah [9] (Dice = 0.533), Mok and Chung [31] (Dice = 0.770), Huang et al [32] (Dice = 0.707), Xu et al [33] (Dice = 0.830), Dey et al [34] (Dice = 0.781), Fan et al [35] (Dice = 0.788), Liu et al [36] (Dice = 0.909), Chen et al [37] (Dice = 0.873), and Wang et al [38] (Dice = 0.731), as well as the supervised approach of Zhu et al [39] (Dice = 0.637).…”
Section: Discussion (mentioning)
confidence: 99%
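The comparison quoted above ranks registration methods by Dice overlap between anatomical label maps after alignment. As a point of reference only, here is a minimal NumPy sketch of how such a Dice score is typically computed; the arrays below are hypothetical placeholders, not data from any of the cited studies.

```python
import numpy as np

def dice_score(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect overlap)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical example: warped moving-image labels vs. fixed-image labels
fixed_labels = np.zeros((64, 64, 64), dtype=np.uint8)
warped_labels = np.zeros((64, 64, 64), dtype=np.uint8)
fixed_labels[20:40, 20:40, 20:40] = 1
warped_labels[22:42, 20:40, 20:40] = 1
print(f"Dice = {dice_score(fixed_labels == 1, warped_labels == 1):.3f}")
```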
“…Similarly, feeding two different images was used in stereo depth estimation for surgery videos [25]. Additionally, contrastive learning was applied to the unsupervised multimodal MRI registration task as a representation learning approach [26].…”
Section: Background and Related Work (mentioning)
confidence: 99%
“…GLCNet [20] introduced a remote sensing semantic segmentation approach based on a global style and local matching network, incorporating a matching contrastive loss to learn pixel-level information. ContraReg [21] achieved non-rigid multimodal image alignment by projecting learned multiscale local patch features into a jointly learned inter-domain embedding space. Li et al [22] proposed a template matching method based on contrastive learning, which increases the number of matches at finer detail and performs dense learning at the pixel level.…”
Section: Introduction (mentioning)
confidence: 99%
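The statement above describes ContraReg's objective as a contrastive loss over projected multiscale local patch features. Purely as an illustration, the sketch below shows a generic patch-wise InfoNCE loss in PyTorch, assuming patch embeddings from the warped and fixed images have already been extracted and projected; it is not ContraReg's actual implementation, and the function name, tensor shapes, and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def patch_infonce_loss(feat_warped: torch.Tensor,
                       feat_fixed: torch.Tensor,
                       temperature: float = 0.07) -> torch.Tensor:
    """
    Generic InfoNCE over sampled patch embeddings: each patch of the warped
    image is pulled toward the patch at the same location in the fixed image
    (positive) and pushed away from patches at other locations (negatives).
    feat_warped, feat_fixed: (num_patches, embed_dim).
    """
    q = F.normalize(feat_warped, dim=1)
    k = F.normalize(feat_fixed, dim=1)
    logits = q @ k.t() / temperature                      # (P, P) similarities
    targets = torch.arange(q.size(0), device=q.device)    # diagonal = positives
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for projected patch features
warped = torch.randn(128, 256)
fixed = torch.randn(128, 256)
print(patch_infonce_loss(warped, fixed).item())
```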