2020
DOI: 10.1007/978-3-030-58545-7_10

Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization

Cited by 136 publications (84 citation statements)
References 38 publications
“…The results of ERM [49], GroupDRO [44], Mixup [58], MLDG [22], DANN [12], C-DANN [32], CORAL [47], MMD [24], IRM [3], ARM [60], MTL [5], SagNet [37], and RSC [17] are cited from [14], except for Office-31, on which we conduct experiments and report results reproduced by us. For PACS, we also compare our method with EisNet [57], Entropy Regularization (ER) [62], DMG [7], DSON [46], MASF [9], and MetaReg [4], which likewise use ResNet-50 as the backbone; the results of these methods are cited directly from the original papers. It is worth noting that, judging from the results reported in DG research over the years, even a marginal improvement (e.g., 1%) in average accuracy is challenging to achieve in the DG community.…”
Section: Comparison With Other Methods
confidence: 99%
“…DomainNet includes 345 object categories from six domains (i.e., Sketch, Real, Quickdraw, Painting, Infograph, and Clipart). Following previous works [12, 8], we selected one domain as an unseen domain and used the remaining domains for training. To introduce the notion of an open set, we split all classes into three subsets: C_k, C_su, and …. As for baselines (i.e., L_DG only), we used three DG methods: Deep All, JiGen [11], and MMLD [8].…”
Section: Methods
confidence: 99%
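The leave-one-domain-out protocol and open-set class split described in the excerpt above can be made concrete with a short sketch. The Python below is a minimal illustration under our own assumptions: the domain list ordering, the subset sizes, and the helper names (open_set_class_split, leave_one_domain_out) are hypothetical and do not come from the cited works.

```python
# Minimal sketch of the leave-one-domain-out protocol on DomainNet, with an
# open-set split of the 345 labels into known classes (C_k), source-unknown
# classes (C_su), and the remaining unknown classes. All names are
# illustrative, not taken from the cited works.
import random

DOMAINS = ["sketch", "real", "quickdraw", "painting", "infograph", "clipart"]
NUM_CLASSES = 345  # DomainNet object categories

def open_set_class_split(num_known, num_source_unknown, seed=0):
    """Partition the label space into known / source-unknown / remaining unknown."""
    rng = random.Random(seed)
    labels = list(range(NUM_CLASSES))
    rng.shuffle(labels)
    c_k = set(labels[:num_known])
    c_su = set(labels[num_known:num_known + num_source_unknown])
    c_rest = set(labels[num_known + num_source_unknown:])
    return c_k, c_su, c_rest

def leave_one_domain_out(target_domain):
    """Hold one domain out as the unseen target; train on the rest."""
    assert target_domain in DOMAINS
    sources = [d for d in DOMAINS if d != target_domain]
    return sources, target_domain

# Example: treat "sketch" as the unseen target domain (subset sizes assumed).
sources, target = leave_one_domain_out("sketch")
c_k, c_su, c_rest = open_set_class_split(num_known=200, num_source_unknown=100)
print(sources, target, len(c_k), len(c_su), len(c_rest))
```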
“…In the open-set setting proposed by Panareda Busto and Gall [5], examples of unknown classes are given in both the source and target domains. Since then, Saito et al. [6] and You et al. [7] …. In many DG approaches, the feature distributions among source domains are aligned by adversarial training [8], meta-learning [9, 10], or self-supervised learning [11, 12]. In this study, we introduce OSDG, the combination of DG and open-set recognition in the vein of Panareda Busto and Gall [5], and design a novel loss for OSDG.…”
Section: Related Work
confidence: 99%
“…Domain augmentation is another popular approach that aims to generate out-of-distribution samples, e.g., via domain-adversarial generation [1, 35] or gradient-based adversarial attacks [34, 42]. Recently, some regularization methods have also been proposed to address domain generalization via shape-biased learning [27] or self-supervised methods [4, 43]. Nevertheless, these methods cannot guarantee that features from unseen target domains share the representation space learned by domain-invariant learning or regularization, since the learned model may overfit the source domains in the absence of the target domain.…”
Section: Related Work
confidence: 99%
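To make the "gradient-based adversarial attacks" route for domain augmentation mentioned above concrete, here is a minimal PyTorch sketch in the spirit of adversarial data augmentation. The single FGSM-style step, the step size eps, and the function names are our simplifying assumptions, not the exact procedure of [34] or [42].

```python
# Minimal PyTorch sketch of gradient-based adversarial domain augmentation:
# perturb source images in the direction that increases the task loss, then
# train on both the clean and the perturbed samples. The single sign-gradient
# step with step size eps is an illustrative simplification.
import torch
import torch.nn.functional as F

def adversarial_augment(model, images, labels, eps=0.01):
    """One gradient-ascent step on the inputs to synthesize harder samples."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    # Perturb along the loss-increasing direction (sign of the input gradient).
    return (images + eps * grad.sign()).detach()

def train_step(model, optimizer, images, labels, eps=0.01):
    """Train jointly on clean and adversarially augmented samples."""
    augmented = adversarial_augment(model, images, labels, eps)
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(augmented), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```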
“…However, since collecting data from arbitrary target domains is expensive and impractical, domain generalization (DG) has been proposed as a more challenging and practical setting, which aims to use multiple source domains to train a model that generalizes well to arbitrary unseen target domains. This setting has attracted wide attention, and many works have been proposed to address the DG problem, mainly via domain-invariant learning [16, 29], meta-learning [2, 8], domain augmentation [34, 42], shape-biased learning [27, 43], etc. However, these methods unavoidably risk overfitting the source domains because the target domain is inaccessible during training; that is, existing methods tend to focus on the limited source domains, which leads to unsatisfactory generalization in unseen target domains.…”
Section: Introduction
confidence: 99%