2023
DOI: 10.48550/arxiv.2302.02350
Preprint

Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization

Abstract: Domain Generalization (DG) is a fundamental challenge for machine learning models, which aims to improve model generalization on various domains. Previous methods focus on generating domain-invariant features from various source domains. However, we argue that the domain variations also contain useful information, i.e., classification-aware information, for downstream tasks, which has been largely ignored. Different from learning domain-invariant features from source domains, we decouple the input images into …
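The decoupling idea in the abstract can be illustrated with a minimal sketch: two separate encoders split an input into a domain-invariant part and a domain-variant (classification-aware) part, and both are aggregated for the downstream classifier instead of discarding the variant part. All names and the linear encoders here are hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_invariant(x, W_inv):
    # Hypothetical encoder: projects inputs into a domain-invariant subspace.
    return np.tanh(x @ W_inv)

def encode_variant(x, W_var):
    # Hypothetical encoder: projects inputs into a domain-variant subspace,
    # intended to retain classification-aware domain information.
    return np.tanh(x @ W_var)

def aggregate(x, W_inv, W_var):
    # Aggregate both feature sets for the classifier, rather than
    # keeping only the invariant features as prior DG methods do.
    return np.concatenate(
        [encode_invariant(x, W_inv), encode_variant(x, W_var)], axis=-1
    )

# Toy usage: a batch of 4 inputs with 8 features each.
x = rng.normal(size=(4, 8))
W_inv = rng.normal(size=(8, 3))
W_var = rng.normal(size=(8, 3))
z = aggregate(x, W_inv, W_var)
print(z.shape)  # (4, 6): 3 invariant + 3 variant features per input
```

The point of the sketch is only the aggregation step: both feature streams feed the downstream task, which is what distinguishes this line of work from purely invariance-based DG.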

Cited by 2 publications (2 citation statements)
References 35 publications (55 reference statements)
“…A rich literature is dedicated to addressing the OOD generalization challenge, which often involves introducing additional regularizations to Empirical Risk Minimization (ERM) (Vapnik 1991). Besides, some methods are proposed to utilize domain-dependent features for better OOD generalization ability, e.g., (Chattopadhyay, Balaji, and Hoffman 2020; Zhang et al. 2021; Bui et al. 2021; Zhang et al. 2023). Yet, they still treat the domain index as a discrete variable and cannot learn the evolving drift across the domains.…”
Section: Related Work, Domain Generalization
confidence: 99%
“…Under the so-called Model-as-a-Service (MaaS) [24] scenario, users can adapt their data via zero-shot or black-box source-free domain adaptation. These techniques work well on language tasks but are inefficient in visual tasks because the distribution gap in vision tasks is large [35]. Even foundation vision models cannot generalize across such a variety of data.…”
Section: Introduction
confidence: 99%