2021
DOI: 10.48550/arxiv.2110.09940
Preprint

Learning Representations that Support Robust Transfer of Predictors

Abstract: Ensuring generalization to unseen environments remains a challenge. Domain shift can lead to substantially degraded performance unless the shifts are well exercised within the available training environments. We introduce a simple robust estimation criterion, transfer risk, that is specifically geared towards optimizing transfer to new environments. Effectively, the criterion amounts to finding a representation that minimizes the risk of applying any optimal predictor trained on one environment to another. The tran…
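The transfer-risk criterion described in the abstract can be sketched in a few lines: fix a representation, fit the optimal predictor on each environment, and average the risk of applying that predictor to every other environment. The sketch below assumes a linear least-squares predictor and squared-error risk; `transfer_risk`, `rep`, and the toy setup are illustrative, not the paper's implementation.

```python
import numpy as np

def transfer_risk(envs, rep):
    """Average squared-error risk of applying the optimal linear
    (least-squares) predictor fit on one environment's represented
    features to every other environment.

    envs: list of (X, y) pairs, one per environment.
    rep:  callable mapping raw features X to represented features.
    """
    risks = []
    for i, (Xi, yi) in enumerate(envs):
        Zi = rep(Xi)
        # optimal least-squares predictor on environment i
        w, *_ = np.linalg.lstsq(Zi, yi, rcond=None)
        for j, (Xj, yj) in enumerate(envs):
            if i == j:
                continue  # only cross-environment transfer counts
            risks.append(np.mean((rep(Xj) @ w - yj) ** 2))
    return float(np.mean(risks))
```

On a toy pair of environments where one feature is invariant and another flips its correlation with the label across environments, restricting the representation to the invariant feature yields a much lower transfer risk than keeping both features.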

Cited by 4 publications (4 citation statements) | References 26 publications
“…Arjovsky et al [2019] propose invariant risk minimization (IRM) to capture invariant correlations by learning representations that elicit an optimal invariant predictor across multiple training environments. Ahuja et al [2020], Jin et al [2020], Krueger et al [2021], Xie et al [2020] further develop several variants of IRM by introducing game theory, regret minimization, variance penalization, etc., and Xu and Jaakkola [2021], Chang et al [2020], Lin et al [2021] try to learn invariant features by coupled adversarial neural networks.…”
Section: Introduction (mentioning)
Confidence: 99%
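The invariant risk minimization objective cited in the statement above can be illustrated with the IRMv1 penalty from Arjovsky et al [2019]: fix the classifier at a scalar w = 1.0 and penalize the squared gradient of each environment's risk with respect to w, so the representation itself must make w = 1.0 simultaneously optimal in every environment. The linear, squared-error sketch below is an assumption for illustration; `irm_penalty` and the toy data are not from the cited papers.

```python
import numpy as np

def irm_penalty(envs, phi):
    """IRMv1-style penalty for a scalar linear model.

    With the classifier fixed at w = 1.0, sums the squared gradient of
    each environment's mean squared error with respect to w. The sum
    is near zero only if w = 1.0 is (locally) optimal everywhere.

    envs: list of (X, y) pairs, one per environment.
    phi:  callable mapping X to a 1-D represented feature, shape (n,).
    """
    penalty = 0.0
    for X, y in envs:
        z = phi(X)
        resid = z - y  # prediction error at w = 1.0
        # d/dw of mean((w*z - y)^2), evaluated at w = 1
        grad = 2.0 * np.mean(resid * z)
        penalty += grad ** 2
    return penalty
```

An invariant feature (same optimal predictor in every environment) drives the penalty toward zero, while a spuriously correlated feature whose optimal weight changes across environments incurs a large penalty.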
“…For context-biased datasets, we compare with Rebias [4], End [57], LfF [41], and Feat-Aug [27]. For the domain-gap dataset (DG task), we compare with domain-label based methods, such as DANN [1], fish [53], and TRM [67], as well as domain-label free methods, such as RSC [21] and StableNet [70]. As we claimed at the end of Section 3.1, we train all models from scratch.…”
Section: Dataset (mentioning)
Confidence: 99%
“…The difficulty is finding the characteristic points of the new task. Xu et al [81] introduced a simple robust estimation criterion, transfer risk, specifically geared towards optimizing transfer to new environments. The criterion amounts to finding a representation that minimizes the risk of applying any optimal predictor trained on one environment to another.…”
Section: Data Migration (mentioning)
Confidence: 99%