2022
DOI: 10.48550/arxiv.2206.02288
Preprint
ACT: Semi-supervised Domain-adaptive Medical Image Segmentation with Asymmetric Co-Training

Abstract: Unsupervised domain adaptation (UDA) has been vastly explored to alleviate domain shifts between source and target domains, by applying a well-performed model in an unlabeled target domain via supervision of a labeled source domain. Recent literature, however, has indicated that the performance is still far from satisfactory in the presence of significant domain shifts. Nonetheless, delineating a few target samples is usually manageable and particularly worthwhile, due to the substantial performance gain. Insp…

Cited by 1 publication (2 citation statements)
References 29 publications
“…Multiple modalities, being collaborative and complementary, can encourage better modality-independent representation learning. Liu et al. [76] present a co-training framework for domain-adaptive medical image segmentation. The framework contains two segmenters: one for the semi-supervised segmentation task (taking labeled and unlabeled target-domain data as input) and one for the unsupervised domain adaptation task (taking labeled source-domain data and unlabeled target-domain data as input).…”
Section: Unsupervised Regularization with Co-training
confidence: 99%
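The exchange between the two segmenters can be sketched as cross pseudo supervision on the shared unlabeled target batch: each segmenter is trained against the other's hard pseudo labels. This is a minimal NumPy sketch of that loss, not the paper's exact formulation; the function and variable names are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_pseudo_loss(logits_a, logits_b):
    """Cross pseudo supervision on unlabeled target data (sketch):
    segmenter A is supervised by B's hard pseudo labels and vice versa.
    logits_* have shape (num_pixels, num_classes)."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    pl_a = p_a.argmax(-1)  # hard pseudo labels from segmenter A
    pl_b = p_b.argmax(-1)  # hard pseudo labels from segmenter B
    n = np.arange(len(pl_a))
    loss_a = -np.log(p_a[n, pl_b] + 1e-8).mean()  # A learns from B
    loss_b = -np.log(p_b[n, pl_a] + 1e-8).mean()  # B learns from A
    return loss_a + loss_b
```

When the two segmenters agree with high confidence, the loss is near zero; disagreement on the unlabeled target pixels produces a large penalty, which is what drives the mutual regularization.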
“…The self-paced strategy can encourage the network to gradually transfer knowledge from easier-to-segment regions to harder ones by minimizing a generalized Jensen-Shannon divergence. Another way to alleviate the influence of noisy pseudo labels is exponential mix-up decay, which adjusts the relative contribution of the supervision signals from labels and pseudo labels over the course of training [76].…”
Section: Unsupervised Regularization with Co-training
confidence: 99%
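One plausible form of such a schedule: an exponentially decaying weight lets trusted ground-truth labels dominate early, then shifts weight toward pseudo labels as they become more reliable. The schedule below is a hedged sketch with illustrative names and hyperparameters, not the exact decay used in [76].

```python
import numpy as np

def emd_weight(step, total_steps, lam0=1.0, k=5.0):
    """Exponentially decaying mixing weight (illustrative schedule):
    starts at lam0 and decays toward lam0 * exp(-k) by the final step."""
    return lam0 * np.exp(-k * step / total_steps)

def mixed_loss(loss_label, loss_pseudo, step, total_steps):
    """Blend label and pseudo-label supervision: early training is
    dominated by the label loss, late training by the pseudo-label loss."""
    w = emd_weight(step, total_steps)
    return w * loss_label + (1.0 - w) * loss_pseudo
```

With `k=5.0`, the label weight falls from 1.0 to roughly `exp(-5) ≈ 0.007` over training, so noisy pseudo labels contribute little while the network is still poorly calibrated.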