2018
DOI: 10.1007/978-3-030-01267-0_38
Deep Domain Generalization via Conditional Invariant Adversarial Networks

Abstract: Domain generalization aims to apply knowledge gained from multiple labeled source domains to unseen target domains. The main difficulty comes from the dataset bias: training data and test data have different distributions, and the training set contains heterogeneous samples from different distributions. Let X denote the features, and Y be the class labels. Existing domain generalization methods address the dataset bias problem by learning a domain-invariant representation h(X) that has the same marginal distribution P(h(X)) across the source domains.
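To make the abstract's idea concrete, the sketch below shows one way a conditional-invariant adversarial setup can be written in PyTorch: a shared feature extractor h(X), a label classifier, and one domain discriminator per class so that invariance is enforced on the class-conditional distributions P(h(X)|Y=c) rather than only on P(h(X)). This is a minimal illustrative sketch, not the authors' reference implementation; the module sizes and the names NUM_DOMAINS, NUM_CLASSES, FEAT_DIM, feature_extractor, label_classifier, and class_discriminators are assumptions.

```python
import torch
import torch.nn as nn

NUM_DOMAINS, NUM_CLASSES, FEAT_DIM = 3, 7, 256  # illustrative sizes

# h(X): one feature extractor shared across all source domains.
feature_extractor = nn.Sequential(
    nn.Flatten(), nn.Linear(3 * 32 * 32, FEAT_DIM), nn.ReLU()
)
# Task classifier predicting Y from h(X).
label_classifier = nn.Linear(FEAT_DIM, NUM_CLASSES)
# One domain discriminator per class: invariance is imposed on
# P(h(X)|Y=c) for each class c, not just on the marginal P(h(X)).
class_discriminators = nn.ModuleList(
    [nn.Linear(FEAT_DIM, NUM_DOMAINS) for _ in range(NUM_CLASSES)]
)
ce = nn.CrossEntropyLoss()

def discriminator_loss(x, y, d):
    # Train each per-class discriminator to identify the source domain d.
    h = feature_extractor(x).detach()
    loss = torch.zeros(())
    for c in range(NUM_CLASSES):
        mask = y == c
        if mask.any():
            loss = loss + ce(class_discriminators[c](h[mask]), d[mask])
    return loss

def generator_losses(x, y):
    # Classification loss plus a confusion loss that pushes each
    # per-class discriminator toward uniform domain predictions,
    # i.e. toward class-conditionally domain-invariant features.
    h = feature_extractor(x)
    cls_loss = ce(label_classifier(h), y)
    conf_loss = torch.zeros(())
    for c in range(NUM_CLASSES):
        mask = y == c
        if mask.any():
            log_probs = class_discriminators[c](h[mask]).log_softmax(dim=1)
            conf_loss = conf_loss - log_probs.mean()
    return cls_loss, conf_loss
```

In training, discriminator_loss and the generator losses would be minimized in alternating steps, as in standard adversarial training.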

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

1
446
0

Year Published

2018
2018
2020
2020

Publication Types

Select...
5
3
1

Relationship

0
9

Authors

Journals

Cited by 482 publications (447 citation statements). References 22 publications.
“…TF is the low-rank parametrized network that was presented together with the dataset PACS in [27]. CIDDG is the conditional invariant deep domain generalization method presented in [29] that trains for image classification with two adversarial constraints: one that maximizes the overall domain confusion following [19] and a second one that does the same per-class. In the DeepC variant, only this second condition is enabled.…”
Section: Multi-Source Domain Generalization
Mentioning confidence: 99%
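For concreteness, the two adversarial constraints this quote attributes to CIDDG can be combined as below. This hedged sketch reuses feature_extractor and generator_losses from the snippet after the abstract; global_discriminator and the weights lam_global and lam_class are illustrative assumptions, not values from the cited papers.

```python
# Constraint one: an overall domain-confusion term over all samples.
global_discriminator = nn.Linear(FEAT_DIM, NUM_DOMAINS)

def combined_objective(x, y, lam_global=1.0, lam_class=1.0):
    # Constraint two (per-class confusion) is already inside
    # generator_losses from the earlier sketch.
    cls_loss, per_class_conf = generator_losses(x, y)
    h = feature_extractor(x)
    # Push the global discriminator toward uniform domain predictions.
    global_conf = -global_discriminator(h).log_softmax(dim=1).mean()
    # The DeepC variant mentioned in the quote keeps only the
    # per-class term, i.e. lam_global = 0.
    return cls_loss + lam_global * global_conf + lam_class * per_class_conf
```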
“…For domain generalization, the training data always contains more than one source domain. Most of the existing domain generalization methods [39,20,22,38] split the source data as 70%-30%…”
Section: Correlation-Aware Adversarial Domain Generalization (CAADG)
Mentioning confidence: 99%
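As an illustration of the split this quote refers to, here is a minimal sketch that partitions each source domain independently into 70% training and 30% held-out data using torch.utils.data.random_split; the names domain_datasets, train_frac, and seed are hypothetical, not taken from the cited papers.

```python
import torch
from torch.utils.data import random_split

def split_source_domains(domain_datasets, train_frac=0.7, seed=0):
    # Split each source domain separately so both halves preserve the
    # per-domain distribution; returns a list of (train, val) pairs.
    gen = torch.Generator().manual_seed(seed)
    splits = []
    for ds in domain_datasets:
        n_train = int(train_frac * len(ds))
        train_ds, val_ds = random_split(
            ds, [n_train, len(ds) - n_train], generator=gen
        )
        splits.append((train_ds, val_ds))
    return splits
```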
“…Office-Caltech, we compare our method on the DG scenario with the state-of-the-art DG methods: learned-support vector machine (L-SVM) [47], kernel Fisher discriminant analysis (KDA) [49], domain-invariant component analysis (DICA) [36], multi-task auto-encoder (MTAE) [20], domain separation network (DSN) [48], deeper, broader and artier domain generalization (DBADG) [21], conditional invariant deep domain generalization (CIDDG) [38], undoing the damage of dataset bias (Undo-Bias) [19], unbiased metric learning (UML) [46], and deep domain generalization with structured low-rank constraint (DGLRC) [22].…”
Section: AlexNet
Mentioning confidence: 99%
“…However, in practice, weight sharing reduces the number of parameters to be learned and the GPU memory requirement during training. Joint optimization of deep networks on inputs from multiple domains has been studied in several recent works [21,33,41,43,73,94,102]. Genova et al. [24] use unsupervised regression for 3D face modelling.…”
Section: Domain Adaptation for Deep Learning
Mentioning confidence: 99%
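To make the weight-sharing point in the quote above concrete, the following hedged sketch jointly optimizes one shared backbone on minibatches drawn from several domains; all module sizes and names (backbone, head, joint_step) are illustrative assumptions rather than any cited method's architecture.

```python
import torch
import torch.nn as nn

# One backbone shared across domains: the parameter count and GPU
# memory footprint stay constant as the number of source domains grows.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
head = nn.Linear(256, 7)
opt = torch.optim.SGD(
    list(backbone.parameters()) + list(head.parameters()), lr=0.01
)
ce = nn.CrossEntropyLoss()

def joint_step(domain_batches):
    # Sum the task loss over one minibatch from every domain and take a
    # single optimizer step on the shared weights.
    opt.zero_grad()
    loss = sum(ce(head(backbone(x)), y) for x, y in domain_batches)
    loss.backward()
    opt.step()
    return float(loss)
```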