2022
DOI: 10.48550/arxiv.2203.05818
Preprint
ZIN: When and How to Learn Invariance by Environment Inference?

Abstract: It is commonplace to encounter heterogeneous data, in which some aspects of the data distribution vary while the underlying causal mechanisms remain constant. When data are divided into distinct environments according to this heterogeneity, recent invariant learning methods propose to learn robust and invariant models based on the environment partition. It is hence tempting to utilize the inherent heterogeneity even when an environment partition is not provided. Unfortunately, in this work, we show that l…

Cited by 4 publications (7 citation statements) | References 12 publications
“…For our choice of learning rate, number of iterations, optimizer, and annealing iterations, we refer to (Lin, Zhu, and Cui 2022). While the reported results were for λ = 10², we verified similar trends for λ = 10 (Lin, Zhu, and Cui 2022).…”
Section: Linear Regression (supporting)
Confidence: 85%
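The statement above refers to an IRM-style objective with a penalty weight λ that is annealed after a warm-up phase. A minimal sketch of the IRMv1 penalty for squared loss is below; the warm-up schedule and the `anneal_iters` name are assumptions, not details from the cited paper — only the λ values (10² and 10) come from the quote.

```python
import numpy as np

def irm_penalty(phi_x, y, w=1.0):
    # IRMv1 penalty for squared loss: squared gradient of the
    # per-environment risk w.r.t. a scalar "dummy classifier" w,
    # evaluated at w = 1.
    grad = np.mean(2.0 * (w * phi_x - y) * phi_x)
    return grad ** 2

def irm_objective(envs, lam):
    # envs: list of (phi_x, y) arrays, one pair per environment.
    risk = np.mean([np.mean((px - y) ** 2) for px, y in envs])
    penalty = np.mean([irm_penalty(px, y) for px, py in [(0, 0)] and envs for px, y in [(px, py)]][:0] or
                      [irm_penalty(px, y) for px, y in envs])
    return risk + lam * penalty

def penalty_weight(step, anneal_iters=100, lam=100.0):
    # Assumed annealing schedule: small penalty weight during warm-up,
    # then the full lambda (e.g. 10**2 or 10) afterwards.
    return 1.0 if step < anneal_iters else lam
```

A representation that predicts y exactly in every environment incurs zero risk and zero penalty, which is the invariance the objective rewards.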
“…As a pre-processing step, we drop all non-numerical features, drop samples with missing values, and normalize each feature and the price label to zero mean and unit variance, giving samples {Xᵢ, yᵢ} ∈ (ℝ³² × ℝ). Experiment Setup: To adapt this task to OoD prediction, following (Lin, Zhu, and Cui 2022), we manually split the training dataset into 10-year segments, using the house's year built as meta-data for partitioning, the intuition being that factors affecting house prices change over time with societal perceptions. For prediction, we consider a linear regression model.…”
Section: Linear Regression (mentioning)
Confidence: 99%
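The preprocessing and decade-based environment split described above can be sketched as follows. This is a minimal illustration assuming numeric feature matrices; function names and the exact normalization order are assumptions.

```python
import numpy as np

def preprocess(X, y):
    # Assumes non-numerical columns were already dropped upstream.
    # Drop samples (rows) with any missing value.
    mask = ~np.isnan(X).any(axis=1) & ~np.isnan(y)
    X, y = X[mask], y[mask]
    # Normalize each feature and the price label to zero mean, unit variance.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = (y - y.mean()) / y.std()
    return X, y

def split_by_decade(year_built):
    # Partition samples into 10-year environments using the year the
    # house was built as meta-data (e.g. 1990-1999 -> one environment).
    return (np.asarray(year_built) // 10).astype(int)
```

For example, houses built in 1991 and 1995 land in the same environment, while a 2003 house lands in a different one.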
“…Recent studies have addressed biases by learning invariances in training data. Motivated by causal discovery, IRM [3] and its variants [25, 30–33] learn a representation such that the optimal classifier built on top of it is the same for all training environments. LISA [34] also learns invariant predictors, via selective mix-up augmentation across different environments.…”
Section: Related Work (mentioning)
Confidence: 99%
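The selective mix-up mentioned above can be illustrated with a small sketch: mix two samples that share a label but come from different environments, so the environment-specific factors are interpolated away while the label is preserved. This is a generic intra-label mixup sketch, not LISA's exact procedure; the `alpha` default is an assumption.

```python
import numpy as np

def selective_mixup(x1, x2, alpha=0.5):
    # x1, x2: two same-label samples drawn from *different* environments.
    # A Beta-distributed coefficient interpolates between them.
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2
```

The mixed sample keeps its original label, and the selection rule (same label, different environment) is what makes the augmentation "selective".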
“…In recent works, invariant learning has been extended to the setting without a priori environment labels but with knowledge of spurious correlations in the training data [34, 8]. Such knowledge is proven to be necessary in this setting [18]. It is used to split the training data into groups that are expected to encode variations of the spurious information, so that this information can be discarded by learning the invariance.…”
Section: Related Work (mentioning)
Confidence: 99%
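The grouping step described above — splitting training data by a known spurious attribute so each group can serve as an environment — can be sketched minimally. The function name and attribute representation are assumptions for illustration.

```python
def group_by_spurious_attr(samples, spurious_attr):
    # Split training data into groups keyed by a known spurious
    # attribute (e.g. image background, time period). Invariant
    # learning then treats each group as an environment whose
    # spurious variation should not drive predictions.
    groups = {}
    for sample, attr in zip(samples, spurious_attr):
        groups.setdefault(attr, []).append(sample)
    return groups
```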
“…Therefore, we need specific theoretical analysis under the setting of group-IL. In a recent work, Lin et al. [18] derived sufficient and necessary assumptions for their proposed algorithm. In this paper, however, we focus on group criteria for general group-IL methods.…”
Section: Related Work (mentioning)
Confidence: 99%