2019
DOI: 10.48550/arxiv.1906.03950
Preprint

Domain-Specific Batch Normalization for Unsupervised Domain Adaptation

Cited by 4 publications (2 citation statements)
References 17 publications
“…DARE is closer to methods which learn separate batchnorm parameters per domain over a deep network, possibly adjusting at test-time (Seo et al., 2019; Chang et al., 2019; Segù et al., 2020); these methods perform well, but they are entirely heuristic-based, difficult to optimize, and come with no formal guarantees. Our theoretical analysis thus serves as preliminary justification for the observed benefits of such methods, which have so far lacked serious grounding.…”
Section: The Domain-adjusted Regression Objective
mentioning, confidence: 99%
“…Some prior works "normalize" each domain by learning separate batchnorm parameters but sharing the rest of the network. Initially suggested for UDA (Li et al., 2016; Bousmalis et al., 2016; Chang et al., 2019), this idea has also been applied to domain generalization (Seo et al., 2019; Segù et al., 2020), but in a somewhat ad-hoc manner; this is problematic because deep domain generalization methods were recently called into question when Gulrajani & Lopez-Paz (2021) gave convincing evidence that no method beats ERM when evaluated fairly. Nevertheless, our analysis provides an initial justification for these methods, suggesting that this idea is worth exploring further.…”
Section: Related Work
mentioning, confidence: 99%
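
Both citation statements refer to the same architectural idea: the batch normalization layers of a network are duplicated once per domain while all other weights are shared, so each domain is standardized by its own feature statistics. The following is a minimal PyTorch sketch of that idea under assumed names (the class DomainSpecificBatchNorm2d and the two-domain setup are illustrative), not the exact implementation from Chang et al. (2019):

```python
import torch
import torch.nn as nn

class DomainSpecificBatchNorm2d(nn.Module):
    """One BatchNorm2d (running statistics + affine parameters) per domain.

    Every other layer of the surrounding network is shared across domains;
    only normalization branches on the domain label.
    """

    def __init__(self, num_features: int, num_domains: int):
        super().__init__()
        # An independent BatchNorm2d for each domain.
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains)
        )

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Route the batch through its domain's normalization layer, so each
        # domain accumulates its own running mean/variance and learns its
        # own scale and shift.
        return self.bns[domain](x)

# Usage sketch: domain 0 = source, domain 1 = target.
dsbn = DomainSpecificBatchNorm2d(num_features=64, num_domains=2)
source_feats = torch.randn(8, 64, 32, 32)
target_feats = torch.randn(8, 64, 32, 32)
out_src = dsbn(source_feats, domain=0)  # normalized with source statistics
out_tgt = dsbn(target_feats, domain=1)  # normalized with target statistics
```

In a full model, each nn.BatchNorm2d in the backbone would be replaced by a layer like this, with the domain label threaded through the forward pass. The "adjusting at test-time" mentioned in the first statement corresponds to choosing or re-estimating the normalization statistics for unseen data at inference, rather than changing any shared weights.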