2021
DOI: 10.48550/arxiv.2101.00318
Preprint
Subtype-aware Unsupervised Domain Adaptation for Medical Diagnosis

Abstract: Recent advances in unsupervised domain adaptation (UDA) show that transferable prototypical learning presents a powerful means for class-conditional alignment, which encourages the closeness of cross-domain class centroids. However, cross-domain inner-class compactness and the underlying fine-grained subtype structure remain largely underexplored. In this work, we propose to adaptively carry out fine-grained subtype-aware alignment by explicitly enforcing class-wise separation and subtype-wise …
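The class-conditional alignment the abstract refers to can be illustrated with a minimal sketch: compute per-class feature centroids (prototypes) in each domain and penalize the distance between matching cross-domain prototypes. This is an illustrative NumPy toy, not the paper's actual method; the function names and the use of squared Euclidean distance are assumptions for exposition.

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    # Prototype per class: mean feature vector over that class's samples.
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def prototype_alignment_loss(src_feats, src_labels,
                             tgt_feats, tgt_pseudo_labels, num_classes):
    # Encourages closeness of cross-domain class centroids: sum of squared
    # Euclidean distances between matching source/target prototypes.
    # Target labels are typically pseudo-labels, since the target is unlabeled.
    src_proto = class_centroids(src_feats, src_labels, num_classes)
    tgt_proto = class_centroids(tgt_feats, tgt_pseudo_labels, num_classes)
    return float(np.sum((src_proto - tgt_proto) ** 2))
```

When source and target prototypes coincide the loss is zero; shifting every target feature by a constant offset shifts each prototype by the same amount and yields a positive loss, which a training loop would minimize alongside the usual source classification loss.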

Cited by 7 publications (2 citation statements)
References 42 publications
“…General UDA approaches [46,51,69,172] necessitate labeled samples from the source domain and unlabeled samples from the target domain for model training.…”
Section: Introduction
confidence: 99%
“…This discrepancy causes the performance of a network trained on one domain (source) to deteriorate when applied to data from another domain (target). Unsupervised Domain Adaptation (UDA) [46,51,69,172,190,191] seeks to minimize domain shift between source and target data, while avoiding the need for expensive pixel-level annotations. The source domain data is crucial in allowing the model to preserve valuable knowledge and iteratively reduce cross-domain differences during adaptation.…”
Section: Introduction
confidence: 99%