2022
DOI: 10.1609/aaai.v36i2.20068
Single-Domain Generalization in Medical Image Segmentation via Test-Time Adaptation from Shape Dictionary

Abstract: Domain generalization typically requires data from multiple source domains for model learning. However, such a strong assumption may not always hold in practice, especially in the medical field, where data sharing is a major concern and is sometimes prohibited due to privacy issues. This paper studies the important yet challenging single-domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain and must generalize directly to different unseen target domains. We p…

Cited by 28 publications (6 citation statements)
References 32 publications
“…This general approach, proposed in (Sun et al 2020), is easy to combine with other tasks and effective for them. Many researchers introduce test-time training to downstream tasks to improve the generalization of the model on out-of-distribution test data (Han et al 2022; Wang et al 2020; Shin et al 2022; Liu et al 2022; Gandelsman et al 2022). Shin et al (Shin et al 2022) propose two complementary modules, intra-modal pseudo-label generation and inter-modal pseudo-label refinement, to take full advantage of the self-supervised signals provided by multi-modal data.…”
Section: Test-time Training Strategy
confidence: 99%
“…LDMI (Wang et al 2021b) proposes a style-complement module that enhances the generalization power of the model by synthesizing images from diverse distributions complementary to the source ones. TASD (Liu et al 2022) presents a novel approach to the challenging single-domain generalization problem for medical image segmentation, explicitly exploiting general semantic shape priors that are extractable from single-domain data and generalizable across domains, to assist domain generalization under the worst-case scenario. This line of research has shown promise in using a single domain to achieve effective generalization, which is particularly relevant when data are limited or multiple source domains are unavailable.…”
Section: Related Work
confidence: 99%
“…DG methods include the use of various data augmentations [1], [19], self-supervised pre-training [20], and new training methods [21]. Closer to this work are DG methods based on the integration of anatomical priors [22], [23]. In those methods, atlas-based probabilities are fused via concatenation inside the deep neural network.…”
Section: Domain Generalization
confidence: 99%
“…One study has reported such issues for fetal brain MRI segmentation [4]. Thus, we have evaluated the proposed trustworthy AI approach with four different deep-learning-based backbone AI algorithms [1], [20], [22], [23] and a fallback algorithm consisting of a registration-based segmentation method [9]. Details of the backbone AI and fallback methods can be found in Appendices A.5, A.6, and A.7.…”
Section: Evaluation on a Large Multi-center Dataset
confidence: 99%