2023
DOI: 10.1609/aaai.v37i1.25138

Unsupervised Domain Adaptation for Medical Image Segmentation by Selective Entropy Constraints and Adaptive Semantic Alignment

Abstract: Generalizing a deep learning model to new domains is crucial for computer-aided medical diagnosis systems. Most existing unsupervised domain adaptation methods have made significant progress in reducing the domain distribution gap through adversarial training. However, these methods may still produce overconfident but erroneous results on unseen target images. This paper proposes a new unsupervised domain adaptation framework for cross-modality medical image segmentation. Specifically, we first introduce two d…

Cited by 11 publications (2 citation statements)
References 29 publications
“…There are UDA approaches dealing with histological images, e.g., for general whole-slide images [94] or specifically for the classification of breast cancer images [95]. Further advances bridging domains of different imaging modalities have been made [96], e.g., by style adaptation [97] or selective entropy constraints [98], where pixels are first categorized as reliable or unreliable before the domain shift is addressed. However, all these approaches rely on a well-labeled source domain.…”
Section: Discussion and Future Work
confidence: 99%
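The selective-entropy idea summarized in this excerpt (target pixels are first split into reliable and unreliable groups, and the entropy constraint is applied selectively) can be illustrated with a minimal PyTorch-style sketch. The confidence threshold, the use of the maximum softmax probability as the reliability criterion, and the choice to simply exclude unreliable pixels are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def selective_entropy_loss(logits, threshold=0.75):
    """Hypothetical selective entropy constraint on unlabeled target pixels.

    Pixels whose maximum softmax probability exceeds `threshold` are treated
    as reliable and have their prediction entropy minimized; the remaining
    (unreliable) pixels are excluded from the entropy term.
    """
    probs = F.softmax(logits, dim=1)            # (N, C, H, W) class probabilities
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(probs * log_probs).sum(dim=1)   # (N, H, W) per-pixel entropy

    confidence, _ = probs.max(dim=1)            # (N, H, W) max class probability
    reliable = confidence > threshold           # boolean mask of reliable pixels

    if reliable.any():
        return entropy[reliable].mean()
    return logits.new_zeros(())                 # no reliable pixels in this batch
```

In practice such a term would be added to the supervised source-domain loss with a weighting factor; the threshold of 0.75 above is an assumed default, not a value reported by the paper.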
“…Unlike other methods, we consider two scenarios of labels and choose appropriate methods to optimize the model, rather than only selecting samples with high confidence for training [40]. Our selective minimax entropy objective L_SMME is given by:…”
Section: Selective Minmax Entropy Based on Feature Cluster Consistency
confidence: 99%
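For context, a minimax entropy term of this kind is commonly implemented with a gradient reversal layer, so that the classifier maximizes the prediction entropy of unlabeled target samples while the feature extractor minimizes it. The sketch below shows only that generic recipe; it is not the citing paper's L_SMME objective, whose selective, feature-cluster-consistency-based formulation is not reproduced in this excerpt, and the weight `lam` is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def adversarial_entropy_loss(target_features, classifier, lam=0.1):
    # Negative entropy of target predictions computed through a gradient
    # reversal layer: minimizing this loss pushes the classifier to maximize
    # entropy, while the upstream feature extractor is pushed to minimize it.
    logits = classifier(GradReverse.apply(target_features))
    probs = F.softmax(logits, dim=1)
    neg_entropy = (probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    return lam * neg_entropy
```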