2022
DOI: 10.1007/978-3-031-19806-9_1
Cross-domain Ensemble Distillation for Domain Generalization

Abstract: Domain generalization is the task of learning models that generalize to unseen target domains. We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED), that learns domain-invariant features while encouraging the model to converge to flat minima, which has recently been shown to be a sufficient condition for domain generalization. To this end, our method generates an ensemble of the output logits from training data with the same label but from different domains…
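The abstract describes the core mechanism concretely enough to sketch: the logits of same-label samples drawn from different source domains are averaged into an ensemble teacher, and each sample is penalized for diverging from it. Below is a minimal PyTorch sketch of such a loss; the function name `xded_loss`, the temperature `tau`, and the stop-gradient on the ensemble are our assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def xded_loss(logits: torch.Tensor, labels: torch.Tensor, tau: float = 4.0) -> torch.Tensor:
    """Cross-domain ensemble distillation loss (illustrative sketch).

    For each class in the mini-batch, the mean logit vector over all samples
    sharing that label (drawn from different source domains) serves as an
    ensemble teacher; every sample is then penalized for diverging from the
    teacher via a temperature-softened KL divergence.
    """
    loss = logits.new_zeros(())
    classes = labels.unique()
    for c in classes:
        class_logits = logits[labels == c]                         # all samples of class c
        teacher = class_logits.mean(dim=0, keepdim=True).detach()  # ensemble target (stop-gradient assumed)
        p_teacher = F.softmax(teacher / tau, dim=1)
        log_p_student = F.log_softmax(class_logits / tau, dim=1)
        loss = loss + F.kl_div(
            log_p_student,
            p_teacher.expand_as(log_p_student),
            reduction="batchmean",
        ) * tau ** 2                                               # standard distillation scaling
    return loss / classes.numel()
```

In practice such a term would be added to the usual cross-entropy loss on the same batch, with batches sampled so that each class appears across several source domains.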

Cited by 19 publications (3 citation statements) | References 60 publications
“…Meta-learning optimizes a general domain-agnostic model, which is adapted into a domain-specific version using a few domain-specific samples at test time (Shu et al. 2021; Chen et al. 2023). Ensemble approaches integrate submodels trained on diverse training domains to generalize to unseen distributions (Lee, Kim, and Kwak 2022; Chu et al. 2022).…”
Section: Related Work
confidence: 99%
“…Given this interesting setting and the promise of domain generalization for studying machine learning robustness, the community has developed a torrent of methods. Most existing methods fall into two categories: one builds explicit regularization that pushes a model to learn representations invariant to the "style" across the training domains [54][55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71][72]; the other performs data augmentation that enriches samples of a given "semantic" content with "styles" from other domains [73][74][75][76][77][78][79][80], likewise aiming to train a model invariant to these "styles". More recently, a line of approaches aims to distill knowledge from pre-trained models into a smaller model to improve generalization performance [81][82][83][84][85].…”
Section: Domain Generalization
confidence: 99%
“…Compared to domain adaptation (DA), it is more practical but also more challenging, since the target images do not participate in training. Mainstream approaches can generally be divided into the following categories: domain-invariant representation learning [20][21][22], data augmentation [23][24][25], meta-learning [26][27][28], and ensemble learning [29][30][31]. For domain-invariant representation learning, adversarial learning [32] and maximum mean discrepancy (MMD) minimization [33] are the most popular strategies for extracting invariant information.…”
Section: Domain Generalization
confidence: 99%
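Since the excerpt above name-checks maximum mean discrepancy as a criterion for domain-invariant representations, a minimal sketch of its standard empirical estimator between feature batches from two domains may help; the Gaussian RBF kernel and the bandwidth `sigma` are illustrative assumptions, not choices made by the cited works.

```python
import torch

def mmd2_rbf(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased empirical estimate of the squared MMD between two feature batches,
    using the Gaussian RBF kernel k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))."""
    def k(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```

Minimizing this quantity over features from pairs of source domains pushes the encoder toward representations whose distributions match across domains.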