2020
DOI: 10.48550/arxiv.2006.14263
Preprint
Target Consistency for Domain Adaptation: when Robustness meets Transferability

Abstract: Learning Invariant Representations has been successfully applied for reconciling a source and a target domain for Unsupervised Domain Adaptation. By investigating the robustness of such methods under the prism of the cluster assumption, we bring new evidence that invariance with a low source risk does not guarantee a well-performing target classifier. More precisely, we show that the cluster assumption is violated in the target domain despite being maintained in the source domain, indicating a lack of robustness…
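The target consistency the title refers to can be illustrated with a small sketch: a robustness penalty that compares a classifier's predictions on a target sample against its predictions on a perturbed (augmented) version of that sample, using a KL divergence. This is a generic consistency regularizer, not the paper's exact objective; the function names and the softmax-over-logits setup are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_penalty(logits_clean, logits_aug, eps=1e-12):
    """Mean KL(p_clean || p_aug) over a batch of target samples.

    A low value means the classifier's decision is stable under the
    perturbation, i.e. the cluster assumption holds locally.
    """
    p = softmax(np.asarray(logits_clean, dtype=float))
    q = softmax(np.asarray(logits_aug, dtype=float))
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))
```

On identical logits the penalty is zero; it grows as the augmented predictions drift from the clean ones, which is exactly the violation the abstract reports in the target domain.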

Cited by 1 publication (1 citation statement)
References 38 publications (69 reference statements)
“…Outstanding progress has been made toward learning more domain-transferable representations by seeking domain invariance. The tensorial product between representations and predictions promotes conditional domain invariance [41]; the use of importance weights [10,62,7,14] has dramatically improved handling of the label shift problem theoretically described in [64]; other approaches hallucinate consistent target samples [38], penalize high singular values of batches of representations [12], or enforce the favorable inductive bias of consistency through various data augmentations in the target domain [45]. Recent works address the problem of adaptation without source data [37,61].…”
Section: Discussion
Confidence: 99%
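The importance-weighting idea cited above for label shift (per-class weights correcting for a mismatch between source and target label marginals) can be sketched as follows. In practice the target label marginal is unknown and must be estimated (e.g., via the confusion-matrix approach analyzed in the label-shift literature); here it is assumed given, and the function name is illustrative.

```python
import numpy as np

def label_shift_weights(source_labels, target_label_marginal, n_classes):
    """Importance weights w[y] = p_target(y) / p_source(y).

    Multiplying each source sample's loss by w[label] makes the reweighted
    source distribution match the target label marginal under label shift.
    """
    counts = np.bincount(np.asarray(source_labels), minlength=n_classes)
    p_source = counts / counts.sum()
    # Guard against classes absent from the source sample.
    return np.asarray(target_label_marginal, dtype=float) / np.maximum(p_source, 1e-12)
```

For example, with source labels [0, 0, 1, 1] (uniform source marginal) and a target marginal of [0.25, 0.75], the weights come out to [0.5, 1.5]: target-overrepresented classes are up-weighted in the source loss.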