2020
DOI: 10.1007/978-3-030-58542-6_5

Learning to Optimize Domain Specific Normalization for Domain Generalization

Cited by 171 publications (110 citation statements)
References 20 publications
“…However, as pointed out by [55], interpretable domain generalization is still a challenge for future research. Similar to our work, the methods in [7,45,46] argue that domain-specific factors also matter in DG. Specifically, [45] measures domain similarities in a transferable domain space that is implicitly learned by domain-specific batch normalization layers.…”
Section: Related Work (supporting)
confidence: 85%
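The excerpt above refers to domain-specific batch normalization layers, the mechanism at the core of DSON. As a rough, hedged sketch (an assumed layout, not the authors' released code), such a layer can be written as a per-domain mixture of batch and instance normalization with domain-specific affine parameters; the class and parameter names below are illustrative.

```python
# Hedged sketch of a domain-specific normalization layer in the spirit of DSON:
# each source domain gets its own mixture of batch and instance normalization
# plus its own affine parameters. Names and layout are illustrative assumptions.
import torch
import torch.nn as nn

class DomainSpecificNorm2d(nn.Module):
    def __init__(self, num_features, num_domains):
        super().__init__()
        self.bns = nn.ModuleList(
            [nn.BatchNorm2d(num_features, affine=False) for _ in range(num_domains)])
        self.ins = nn.ModuleList(
            [nn.InstanceNorm2d(num_features, affine=False) for _ in range(num_domains)])
        self.alpha = nn.Parameter(torch.zeros(num_domains))   # per-domain mixture logits
        self.weight = nn.Parameter(torch.ones(num_domains, num_features))
        self.bias = nn.Parameter(torch.zeros(num_domains, num_features))

    def forward(self, x, domain):
        # Convex combination of batch- and instance-normalized features
        # for the given source-domain index, followed by a per-domain affine map.
        w = torch.sigmoid(self.alpha[domain])
        out = w * self.bns[domain](x) + (1.0 - w) * self.ins[domain](x)
        gamma = self.weight[domain].view(1, -1, 1, 1)
        beta = self.bias[domain].view(1, -1, 1, 1)
        return out * gamma + beta
```

At test time, when the target-domain index is unknown, one common choice is to run all per-domain branches and average the resulting predictions.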
“…The results of ERM [49], GroupDRO [44], Mixup [58], MLDG [22], DANN [12], C-DANN [32], CORAL [47], MMD [24], IRM [3], ARM [60], MTL [5], SagNet [37], and RSC [17] are cited from [14], except for Office-31, on which we conducted experiments and report the results we reproduced. For PACS, we also compare our method with EISNet [57], Entropy Regularization (ER) [62], DMG [7], DSON [46], MASF [9], and MetaReg [4], which also use ResNet-50 as the backbone; the results of these methods are cited directly from the original papers. It is worth noting that, from the results reported in previous DG research over the years, one can easily see that even a marginal improvement (e.g., 1%) in average accuracy is challenging in the DG community.…”
Section: Comparison With Other Methods (mentioning)
confidence: 99%
“…Recently, several methods have also been proposed to address the domain generalization problem in classification and semantic segmentation tasks [27], [28], [29], [30], [31], [32], [33], [34]. In particular, data-augmentation-based methods have achieved significant advances in these tasks; they can be divided into image-level and feature-level data augmentation.…”
Section: B. Domain Generalization (mentioning)
confidence: 99%
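As a hedged illustration of the image-level vs. feature-level distinction mentioned in the excerpt (the function names are ours, not from any cited paper): image-level augmentation perturbs input pixels, while feature-level augmentation perturbs intermediate feature statistics, for example by mixing channel-wise means and standard deviations across a batch.

```python
# Illustrative only: generic examples of the two augmentation families.
import torch
import torchvision.transforms as T

# Image-level: perturb the input pixels before they reach the network.
image_level_aug = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    T.RandomGrayscale(p=0.1),
])

def feature_level_aug(feat, eps=1e-6):
    """feat: (N, C, H, W) intermediate CNN features; mixes per-channel statistics
    with those of a randomly shuffled batch to simulate unseen 'styles'."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    normed = (feat - mu) / sigma
    perm = torch.randperm(feat.size(0), device=feat.device)
    lam = torch.rand(feat.size(0), 1, 1, 1, device=feat.device)
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sigma_mix = lam * sigma + (1 - lam) * sigma[perm]
    return normed * sigma_mix + mu_mix
```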
“…Domain generalization. Unlike domain adaptation, domain generalization [3,4,5] cannot use any samples of the target domain, yet it still has to capture transferable information across domains. To perform the classification task without target-domain data available at training time, labelled data from several related classification tasks can be used.…”
Section: Related Work (mentioning)
confidence: 99%
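As a minimal sketch of the training setup described in this last excerpt (an assumed layout, not code from the cited work): labelled datasets from several source domains are pooled and a single classifier is trained on them, with no target-domain samples seen before evaluation.

```python
# Minimal, assumption-laden sketch of the domain-generalization training setup:
# pool several labelled source-domain datasets and train one classifier on them.
import torch
from torch.utils.data import ConcatDataset, DataLoader

def make_source_loader(source_datasets, batch_size=64):
    """source_datasets: list of labelled datasets, one per source domain."""
    pooled = ConcatDataset(source_datasets)   # no target-domain samples included
    return DataLoader(pooled, batch_size=batch_size, shuffle=True)

def train_one_epoch(model, loader, optimizer, device="cuda"):
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```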