Reducing Domain Gap by Reducing Style Bias
Preprint, 2019
DOI: 10.48550/arxiv.1910.11645

Cited by 3 publications (4 citation statements); references 0 publications.
“…To empirically corroborate the effectiveness of IIB, we conduct experiments on DomainBed (average accuracy, %):

Method                                   CMNIST     RMNIST     VLCS       PACS       OfficeHome  TerraInc   DomainNet  Avg
(Li et al 2018b)                         39.1±4.4   97.5±0.2   77.5±0.2   78.8±2.2   64.3±1.7    39.9±3.2   38.0±0.1   62.2
MLDG (Li et al 2018a)                    36.7±0.2   97.6±0.0   77.2±0.9   82.9±1.7   66.1±0.5    46.2±0.9   41.0±0.2   64.0
IRM (Arjovsky et al 2019)                40.3±4.2   97.0±0.2   76.3±0.6   81.5±0.8   64.3±1.5    41.2±3.6   33.5±3.0   62.0
GroupDRO (Sagawa et al 2019)             36.8±0.1   97.6±0.1   77.9±0.5   83.5±0.2   65.2±0.2    44.9±1.4   33.0±0.3   62.7
MMD (Akuzawa, Iwasawa, and Matsuo 2019)  36.8±0.1   97.8±0.1   77.3±0.5   83.2±0.2   60.2±5.2    46.5±1.5   23.4±9.5   60.7
VREx (Krueger et al 2020a)               36.9±0.3   93.6±3.4   76.7±1.0   81.3±0.9   64.9±1.3    37.3±3.0   33.4±3.1   60.6
ARM (Zhang et al 2020)                   36.8±0.0   98.1±0.1   76.6±0.5   81.7±0.2   64.4±0.2    42.6±2.7   35.2±0.1   62.2
Mixup (Yan et al 2020)                   33.4±4.7   97.8±0.0   77.7±0.6   83.2±0.4   67.0±0.2    48.7±0.4   38.5±0.3   63.8
RSC (Huang et al 2020)                   36.5±0.2   97.6±0.1   77.5±0.5   82.6±0.7   65.8±0.7    40.0±0.8   38.9±0.5   62.7
MTL (Blanchard et al 2021)               35.0±1.7   97.8±0.1   76.6±0.5   83.7±0.4   65.7±0.5    44.9±1.2   40.6±0.1   63.5
SagNet (Nam et al 2021)                  36.5±0.1   94.0±3.0   77.…

To offset the increased number of parameters in the nonlinear classifier, we correspondingly reduce the number of conv layers in the backbone network to achieve a balance.…”
Section: DomainBed Experiments
confidence: 99%
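
The excerpt's closing remark trades classifier capacity against backbone depth. As a rough illustration of that trade-off, here is a minimal PyTorch sketch with assumed layer sizes (not the cited paper's code) of swapping a linear head for a small MLP head:

    import torch.nn as nn

    # Minimal sketch (assumed sizes): a nonlinear MLP head has more
    # parameters than a plain linear classifier, which the quoted passage
    # offsets by removing conv layers from the backbone.
    def make_head(feat_dim: int, num_classes: int, nonlinear: bool = True) -> nn.Module:
        if not nonlinear:
            return nn.Linear(feat_dim, num_classes)  # baseline linear head
        return nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 2, num_classes),
        )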
“…This can be addressed explicitly, with something as simple as data augmentation [6] that mimics the distributional shift between domains, or it can be learnt implicitly with adversarial learning [7]–[9], which directly optimizes a neural network to remove all domain-identifiable information from the feature representations. In a similar vein, some methods use disentangled representations [10], [11] (like those used in style-transfer networks) to separate domain-variant from domain-invariant representations; these are often learnt adversarially as well. It is also possible to regularize networks with information bottlenecks [12], metric learning and statistical feature alignment [13], meta-learning [14]–[16], self-supervised learning [17]–[19], or gradient matching between domains [20].…”
Section: Domain Generalization Mechanisms
confidence: 99%
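
Of the mechanisms this statement lists, adversarial learning [7]–[9] is the one whose "remove domain-identifiable information" objective is least obvious to implement. A common realization is DANN-style gradient reversal; the PyTorch sketch below is a minimal illustration of the idea (the names and the fixed lambda are assumptions, not taken from the cited works):

    import torch
    from torch.autograd import Function

    class GradReverse(Function):
        # Identity on the forward pass; negated, scaled gradient on the
        # backward pass, so the feature extractor learns to fool the
        # domain classifier and thus sheds domain-identifiable features.
        @staticmethod
        def forward(ctx, x, lam: float):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    def grad_reverse(x, lam: float = 1.0):
        return GradReverse.apply(x, lam)

    # Usage: domain_logits = domain_head(grad_reverse(features)),
    # then add the domain-classification loss to the task loss.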
“…
• Invariant Risk Minimization (IRM) [59]
• Domain Adversarial Neural Networks (DANN) [7]
• Maximum Mean Discrepancy (MMD) [8]
• Style Agnostic Network (SagNet) [11]
Representational Regularization:…”
Section: B. Algorithms
confidence: 99%
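
SagNet, the last method listed, is the paper indexed on this page; its central idea is a content branch trained to be agnostic to style, where "style" is captured by per-channel feature statistics. The sketch below illustrates that style-randomization idea in PyTorch (the interpolation scheme, names, and epsilon are assumptions; this is not the authors' released code):

    import torch

    def style_randomize(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        # x: feature maps of shape (N, C, H, W). Style is approximated by
        # per-sample, per-channel mean/std, as in instance normalization.
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
        content = (x - mu) / sigma  # statistics stripped; content kept

        # Re-style each sample with statistics interpolated toward a
        # randomly chosen other sample, so a classifier trained on the
        # output cannot rely on style cues.
        perm = torch.randperm(x.size(0), device=x.device)
        alpha = torch.rand(x.size(0), 1, 1, 1, device=x.device)
        mu_mix = alpha * mu + (1 - alpha) * mu[perm]
        sigma_mix = alpha * sigma + (1 - alpha) * sigma[perm]
        return content * sigma_mix + mu_mix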