2022
DOI: 10.1007/s10994-021-06080-w

On the benefits of representation regularization in invariance based domain generalization

Abstract: A crucial aspect of reliable machine learning is designing a deployable system that generalizes to new, related but unobserved environments. Domain generalization aims to close this prediction gap between observed and unseen environments. Previous approaches commonly learn an invariant representation to achieve good empirical performance. In this paper, we reveal that merely learning the invariant representation is vulnerable to the related unseen environment. To this end, we derive a…

Cited by 15 publications (11 citation statements)
References 13 publications
“…Following these deficiencies, several works propose alternate objectives for achieving invariance (Krueger et al 2021;Bellot and van der Schaar 2020;Jin, Barzilay, and Jaakkola 2020;Ahuja et al 2021;Shui, Wang, and Gagné 2021). However, unlike previous works that aim to improve the invariance learning objective, we question whether invariance as a constraint can be improved upon for better performance.…”
Section: Related Workmentioning
confidence: 92%
“…Additionally, Oh et al 6 incorporated user priors regarding moving objects into the low-rank model and improved the performance. Inspired by the successes of deep learning models in numerous vision tasks, 7–26 Yan et al 27 integrated spatial attention mechanisms into deep networks, which effectively mitigate misaligned content during HDR image reconstruction. However, motion removal-based methods, particularly in the presence of large-scale object motions in LDR images, tend to exclude a considerable number of pixels before merging the input LDR images.…”
Section: Related Work 21 Motion Removal-based Methodsmentioning
confidence: 99%
“…Lipschitz regularization. Lipschitz regularization was first used in statistical regression problems (Wang, Du, and Shen 2013) and has recently been shown to provide a better generalization guarantee (Ma 2019, 2020) and to be a sufficient condition for the smoothness of the representation function (Shui, Wang, and Gagné 2021) in the deep learning context. It penalizes the gradient of the model's output with respect to its input features.…”
Section: Preliminariesmentioning
confidence: 99%
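The mechanism described above — penalizing the gradient of the model output with respect to the input features — can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: the linear model, the finite-difference gradient, and all function names are illustrative assumptions.

```python
import numpy as np

def model(W, x):
    """A toy scalar linear model: output = <W, x> (illustrative only)."""
    return float(np.dot(W, x))

def input_gradient(W, x, eps=1e-5):
    """Central finite-difference gradient of the output w.r.t. each input feature."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        grad[i] = (model(W, x + d) - model(W, x - d)) / (2 * eps)
    return grad

def lipschitz_penalty(W, xs):
    """Mean squared input-gradient norm over a batch; added to the training loss
    to encourage a small Lipschitz constant (smooth representations)."""
    return float(np.mean([np.sum(input_gradient(W, x) ** 2) for x in xs]))
```

For this linear model the input gradient is constant and equal to W, so the penalty reduces to ||W||^2; for a deep network the gradient varies with x, and frameworks with automatic differentiation would replace the finite-difference loop.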