2020
DOI: 10.1007/978-3-030-59710-8_40
Dual-Task Self-supervision for Cross-modality Domain Adaptation

Cited by 19 publications (9 citation statements)
References 16 publications
“…Given data from multiple sources, Domain-Prediction-based methods find a harmonized data representation such that all information relating to the source domain of the image is removed [6,7,14]. This goal can be achieved by appending another head to the network, which acts as a domain classifier.…”
Section: Domain-prediction-based Methods (mentioning)
confidence: 99%
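The statement above describes attaching a domain-classifier head to a shared encoder so that domain information can be removed from the representation. A standard way to realize this is a gradient-reversal layer: the domain head's gradient is negated before it reaches the encoder. The following is a minimal NumPy sketch of that backward pass; all weights, shapes, and the `LAMBDA` reversal strength are illustrative assumptions, not the cited papers' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared encoder with two heads: a primary-task head and a
# domain-classifier head (all linear, for illustration only).
W_enc = rng.normal(size=(8, 4))    # shared encoder weights
W_task = rng.normal(size=(4, 1))   # primary-task head
W_dom = rng.normal(size=(4, 1))    # domain-classifier head
LAMBDA = 1.0                       # gradient-reversal strength (assumed)

def forward(x):
    z = x @ W_enc                  # shared representation
    return z, z @ W_task, z @ W_dom

x = rng.normal(size=(16, 8))
z, y_task, y_dom = forward(x)

# Placeholder upstream gradients dL/dy from the two losses.
g_task = np.ones_like(y_task)
g_dom = np.ones_like(y_dom)

# Backprop into the shared representation: the task gradient passes
# through unchanged, while the domain gradient is NEGATED (gradient
# reversal), pushing the encoder toward domain-invariant features.
g_z = g_task @ W_task.T - LAMBDA * (g_dom @ W_dom.T)
```

Minimizing the domain loss through the reversed gradient is what removes source-domain information from the shared representation, while the task head keeps it useful for the primary task.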
“…The work in Ref. 49 showed that a hierarchical domain adaptation structure was very effective at adapting a U-shaped model trained in one domain to another for segmentation. Inspired by their work, we develop a hierarchical adversarial learning structure for HRNet to compensate for the domain shift between our Datasets, T and B, for OCT speckle noise compression.…”
Section: Hierarchical Adversarial Learning (mentioning)
confidence: 99%
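Hierarchical adversarial learning, as described in the statement above, typically attaches one domain discriminator per feature level of the network and sums the per-level adversarial losses, so domain shift is penalized at several depths rather than only at the output. A minimal NumPy sketch, with toy linear discriminators and invented feature sizes (the cited work's actual discriminators and levels differ):

```python
import numpy as np

rng = np.random.default_rng(2)

def discriminator_loss(feat, w):
    # Toy per-level domain discriminator: a linear score followed by a
    # logistic (softplus) loss, treating every sample as source-domain.
    score = feat @ w
    return float(np.mean(np.log1p(np.exp(-score))))

# Hypothetical multi-scale features from a U-shaped / HRNet-style model,
# e.g. three decoder levels with decreasing channel counts.
feats = [rng.normal(size=(4, d)) for d in (32, 16, 8)]
discs = [rng.normal(size=(d, 1)) for d in (32, 16, 8)]

# Hierarchical adversarial loss: one discriminator per level, summed so
# the adaptation signal reaches several depths of the network.
adv_loss = sum(discriminator_loss(f, w) for f, w in zip(feats, discs))
```

In the full adversarial setup each discriminator would also see target-domain features, and the segmentation network would be trained to fool all of them jointly.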
“…In domain adaptation approaches for medical image segmentation, a DL-based model trained with a labeled dataset in a specific domain (e.g. normal CXR) is refined for a different domain's dataset in semi-supervised, self-supervised, or unsupervised manners (Bai et al, 2017;Tang et al, 2019;Tarvainen and Valpola, 2017;Li et al, 2020a;Perone et al, 2019;Xue et al, 2020;Orbes-Arteaga et al, 2019;Li et al, 2020b;Chen et al, 2020). These approaches take advantage of features learned via supervised training in a specific domain and distill that knowledge into unsupervised learning tasks in unseen domains.…”
Section: Semi-supervised Learning Via Domain Adaptation (mentioning)
confidence: 99%
“…Recent research on self-supervised learning has brought large improvements in medical domain adaptation and image segmentation tasks, by promoting consistency between model outputs given the same input under different perturbations, or by training an auxiliary proxy task (Xue et al, 2020;Orbes-Arteaga et al, 2019;Li et al, 2020b). In general, DL models trained with auxiliary self-supervised losses have been shown to achieve better generalization as well as better primary-task performance, especially when training with limited labeled data and abundant unlabeled data.…”
Section: Self-supervised Learning (mentioning)
confidence: 99%
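The consistency idea in the statement above can be sketched concretely: feed two stochastically perturbed versions of the same input through the model and penalize the difference between the two predictions, which requires no labels. A minimal NumPy sketch, where the "network" is a toy linear-softmax map and the perturbation is additive Gaussian noise (both are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x, w):
    # Toy deterministic "network": a fixed linear map + row-wise softmax.
    logits = x @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

w = rng.normal(size=(8, 3))
x = rng.normal(size=(4, 8))

# Two stochastic perturbations of the SAME input (here: additive noise;
# in practice augmentations such as flips, crops, or dropout).
x1 = x + 0.05 * rng.normal(size=x.shape)
x2 = x + 0.05 * rng.normal(size=x.shape)

p1, p2 = model(x1, w), model(x2, w)

# Unsupervised consistency loss: mean squared difference between the two
# predicted distributions; minimizing it uses only unlabeled data.
consistency_loss = float(np.mean((p1 - p2) ** 2))
```

In the cited setups this loss is added to the supervised segmentation loss, so unlabeled target-domain images still shape the model's decision boundaries.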