2021 | Preprint | DOI: 10.48550/arxiv.2102.13128

An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization

Abstract: A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them. This objective is broadly known as domain generalization. A common belief is that ERM can interpolate but not extrapolate, and that the latter is considerably more difficult, but these claims are vague and lack formal justification. In this work, we recast generalization over sub-g…

Cited by 5 publications | 7 citation statements (0 supporting, 7 mentioning, 0 contrasting) | References 17 publications
“…In recent works [Rosenfeld et al., 2021b; Gulrajani and Lopez-Paz, 2021; Rosenfeld et al., 2021a; Kamath et al., 2021], different limitations of invariance-based approaches in addressing OOD generalization failures were highlighted. In [Rosenfeld et al., 2021b], the authors showed that if we use the IRMv1 objective, then for non-linear tasks the solutions from IRMv1 are no better than ERM in generalizing OOD.…”
Section: Invariance Principles in OOD Generalization
confidence: 99%
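For readers unfamiliar with the IRMv1 objective referenced in this statement, below is a minimal sketch of its penalty term in the form popularized by Arjovsky et al. (2019) and common in DomainBed-style implementations. The function name and usage comment are illustrative assumptions, not code from the cited papers.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # IRMv1 penalty: squared norm of the gradient of the risk with
    # respect to a fixed scalar "dummy" classifier w = 1.0 that
    # multiplies the logits.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * scale, y)
    (grad,) = torch.autograd.grad(loss, [scale], create_graph=True)
    return grad.pow(2).sum()

# Illustrative per-environment objective (lam is the penalty weight):
#   total = sum over environments e of
#           risk_e + lam * irmv1_penalty(logits_e, y_e)
```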
“…[2019] give a generalization bound for distributions with sufficiently small H-divergence, while Rosenfeld et al. [2021a] model domain generalization as an online game, showing that generalizing beyond the convex hull is NP-hard.…”
Section: Additional Related Work
confidence: 99%
“…The DG problem setting was first analysed in (Blanchard et al., 2011). Since then there have been some attempts to analyse DG algorithms from a generalisation-bound perspective (Muandet et al., 2013; Blanchard et al., 2021; Hu et al., 2020; Albuquerque et al., 2020; Rosenfeld et al., 2021). However, these studies have theoretical results that are either restricted to specific model classes, such as kernel machines, or make strong assumptions about how the domains seen during training will resemble those seen at test time, e.g., that all domains are convex combinations of a finite predetermined set of prototypical domains.…”
Section: Related Work: Theoretical Analysis of the DG Setting and Algo…
confidence: 99%
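To make the "convex combinations of prototypical domains" assumption in the statement above concrete, here is a small illustrative sketch: a test domain inside the convex hull of K training domains is simulated by mixing their samples with weights on the simplex. All names and distributions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture_domain(domain_samplers, alpha, n):
    """Draw n points from the mixture sum_k alpha_k * P_k,
    i.e., a domain inside the convex hull of the training domains."""
    ks = rng.choice(len(domain_samplers), size=n, p=alpha)
    return np.stack([domain_samplers[k]() for k in ks])

# Example: three 1-D Gaussian "prototypical" training domains.
domains = [lambda m=m: rng.normal(loc=m) for m in (-2.0, 0.0, 2.0)]
alpha = np.array([0.2, 0.5, 0.3])  # simplex weights: an interpolated domain
x = sample_mixture_domain(domains, alpha, n=1000)
```

Extrapolation, by contrast, would correspond to a test domain that is not expressible as any such mixture, which is the regime the surveyed paper argues is computationally harder.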
“…However, we also provide a means to transform any bound on the expected (or "average-case") risk into a high-confidence bound on the worst-case risk. Rosenfeld et al. (2021) is another work that theoretically investigates the generalisation of ERM in a DG setting. They deal with online DG, where each time-step corresponds to observing a new domain, and the learner must produce a new model capable of generalising to novel domains.…”
Section: Related Work: Theoretical Analysis of the DG Setting and Algo…
confidence: 99%
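The online protocol described in this statement — a new domain revealed at each time-step, with the learner updating its model after suffering risk — can be summarized with a short schematic loop. Everything below (the toy player, the squared-error risk, the update rule) is an illustrative assumption, not the cited paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

class MeanPlayer:
    """Toy player: predicts one scalar, nudged toward each new domain."""
    def __init__(self):
        self.theta = 0.0
    def evaluate(self, xs):
        return float(np.mean((xs - self.theta) ** 2))  # squared-error risk
    def update(self, xs, lr=0.5):
        self.theta += lr * (xs.mean() - self.theta)    # move toward domain mean

def online_dg(player, domain_means, n=200):
    risks = []
    for mu in domain_means:          # each round: adversary reveals a new domain
        xs = rng.normal(loc=mu, size=n)
        risks.append(player.evaluate(xs))  # suffer risk before updating
        player.update(xs)                  # then learn from the round
    return risks

risks = online_dg(MeanPlayer(), domain_means=[0.0, 1.0, 2.0, 4.0])
```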