2021
DOI: 10.48550/arxiv.2111.02355
Preprint
A Theoretical Analysis on Independence-driven Importance Weighting for Covariate-shift Generalization

Abstract: Covariate shift generalization, a typical case of out-of-distribution (OOD) generalization, requires good performance on an unknown testing distribution that differs from the accessible training distribution in the form of covariate shift. Recently, stable learning algorithms have shown empirical effectiveness in dealing with covariate shift generalization across several learning models, including regression algorithms and deep neural networks. However, the theoretical explanations for such effectiveness are still…
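The importance-weighting idea underlying the abstract can be illustrated with a minimal sketch of classical density-ratio reweighting. Note this is the generic technique, not the paper's independence-driven variant; the distributions, the misspecified linear model, and the known densities below are hypothetical assumptions for illustration (in practice the density ratio must be estimated, e.g. with a domain classifier, because the test distribution is unknown).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical misspecified regression: the true relation is y = x^2,
# but we fit a linear model y ≈ beta * x. Training covariates follow
# N(0, 1); the shifted test covariates follow N(1, 0.25).
n = 2000
x_train = rng.normal(0.0, 1.0, n)
y_train = x_train ** 2 + rng.normal(0.0, 0.1, n)

def gauss_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2); usable here only because both
    # distributions are known in this toy setup.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_test(x) / p_train(x) reweight the training
# losses so their average approximates the expected test loss.
w = gauss_pdf(x_train, 1.0, 0.5) / gauss_pdf(x_train, 0.0, 1.0)

# Closed-form (weighted) least squares through the origin.
beta_unweighted = np.sum(x_train * y_train) / np.sum(x_train ** 2)
beta_weighted = np.sum(w * x_train * y_train) / np.sum(w * x_train ** 2)

# Under the shifted test distribution the best linear slope is
# E_test[x^3] / E_test[x^2] = 1.75 / 1.25 = 1.4, while the unweighted
# fit stays near 0 by symmetry of the training covariates.
```

The weighted estimate tracks the test-optimal slope, while the unweighted one optimizes for the training distribution; this gap is exactly what covariate-shift generalization methods aim to close.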

Cited by 3 publications (2 citation statements)
References 22 publications
“…Compared with unsupervised methods, approaches in this category incorporate supervised information to design various model architectures and corresponding learning strategies. Typical approaches include domain generalization methods [1167,1168,1169,1170,1171,1172], causal & invariant learning [1173,1174,1175,1176,1177,1178], and stable learning [1179,1180,1181,1182,1183,1184,1185].…”
Section: Robustness
confidence: 99%
“…With access to data from several source domains, Domain Generalization (DG) problems aim to learn models that generalize well on unseen target domains. DG focuses mostly on computer-vision classification problems, since predictions are prone to be affected by disturbances on images (e.g., style, lighting, rotation). According to [72], DG methods can be categorized by methodological focus into three branches: representation learning [18,42,54,74], training strategy [7,40,76,86,98,99,102], and data augmentation [58,59,69,79,90,104]. Existing surveys of this field can be found in [72,80,103].…”
Section: Domain Generalization
confidence: 99%