2020
DOI: 10.48550/arxiv.2006.13629
Preprint

Robust Domain Adaptation: Representations, Weights and Inductive Bias

Abstract: Unsupervised Domain Adaptation (UDA) has attracted a lot of attention in the last ten years. The emergence of Domain Invariant Representations (IR) has drastically improved the transferability of representations from a labelled source domain to a new and unlabelled target domain. However, a potential pitfall of this approach, namely the presence of label shift, has been brought to light. Some works address this issue with a relaxed version of domain invariance obtained by weighting samples, a strategy often re…
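
The sample-weighting strategy mentioned in the abstract can be made concrete with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration of weighted domain-adversarial alignment, not the paper's exact algorithm: per-class weights on the source batch relax strict marginal invariance so that a shifted label distribution does not force a harmful alignment. Network sizes, the weight values `class_w`, and the 0.1 trade-off are placeholder assumptions.

```python
# Minimal, hypothetical sketch of weighted domain-adversarial alignment
# (illustrative only; not the paper's exact algorithm). Per-class weights on the
# source batch relax strict marginal invariance under label shift.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32))  # encoder
clf = nn.Linear(32, 3)                                                  # classifier
disc = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))    # domain critic

opt_fc = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

xs, ys = torch.randn(16, 20), torch.randint(0, 3, (16,))   # labelled source batch
xt = torch.randn(16, 20)                                     # unlabelled target batch

# Hypothetical per-class weights, e.g. estimated target/source label ratios.
class_w = torch.tensor([1.2, 0.8, 1.0])
w = class_w[ys]                                             # one weight per source sample
dom_labels = torch.cat([torch.ones(16), torch.zeros(16)])   # 1 = source, 0 = target
dom_w = torch.cat([w, torch.ones(16)])                      # target samples keep weight 1

# (1) Critic step: separate the two domains on detached features.
zs, zt = feat(xs), feat(xt)
d_logits = torch.cat([disc(zs.detach()), disc(zt.detach())]).squeeze(1)
d_loss = F.binary_cross_entropy_with_logits(d_logits, dom_labels, weight=dom_w)
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# (2) Encoder/classifier step: supervised loss plus fooling the critic on the
#     weighted batch (only opt_fc is stepped, so the critic is left untouched).
zs, zt = feat(xs), feat(xt)
cls_loss = F.cross_entropy(clf(zs), ys)
a_logits = torch.cat([disc(zs), disc(zt)]).squeeze(1)
adv_loss = -F.binary_cross_entropy_with_logits(a_logits, dom_labels, weight=dom_w)
opt_fc.zero_grad()
(cls_loss + 0.1 * adv_loss).backward()
opt_fc.step()
```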

Cited by 3 publications (7 citation statements)
References 17 publications
“…Outstanding progress has been made towards learning more domain-transferable representations by seeking domain invariance. The tensorial product between representations and predictions promotes conditional domain invariance [41]; the use of weights [10,62,7,14] has dramatically mitigated the label-shift problem theoretically described in [64]; other approaches hallucinate consistent target samples [38], penalize high singular values of batches of representations [12], or enforce the favourable inductive bias of consistency through various data augmentations in the target domain [45]. Recent works address the problem of adaptation without source data [37,61].…”
Section: Discussion
Mentioning confidence: 99%
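
To make the first technique in the quote above more concrete, here is a short, assumed sketch of conditioning a domain critic on the tensor (outer) product of features and class predictions; the dimensions and critic architecture are illustrative placeholders, not taken from [41].

```python
# Hypothetical sketch of conditioning a domain critic on the outer (tensor)
# product of features and class predictions; dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_classes = 32, 3
critic = nn.Sequential(nn.Linear(feat_dim * n_classes, 64), nn.ReLU(), nn.Linear(64, 1))

def conditional_input(z, logits):
    """Flattened outer product softmax(logits) (x) z, one row per sample."""
    p = F.softmax(logits, dim=1)                                  # (B, C)
    return torch.bmm(p.unsqueeze(2), z.unsqueeze(1)).flatten(1)   # (B, C * feat_dim)

z = torch.randn(8, feat_dim)         # features from some encoder
logits = torch.randn(8, n_classes)   # classifier outputs on the same batch
domain_logit = critic(conditional_input(z, logits))  # (8, 1), fed to an adversarial loss
```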
“…The main idea of this paper is to incorporate an appropriate regularization term to learn a classifier f for the target domain. Specifically, we attempt to discard domain alignment, since misalignment tends to do more harm than good in PDA [11] and even in UDA [22]. Motivated by the theoretical analysis, we instead focus on model smoothness.…”
Section: Main Idea
Mentioning confidence: 99%
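
The "model smoothness" idea quoted above can be sketched as a generic consistency penalty on unlabelled target data. The function below is an assumed illustration, not necessarily the cited paper's exact regulariser: it penalises the KL divergence between predictions on a target batch and on a slightly perturbed copy.

```python
# Hypothetical model-smoothness penalty on unlabelled target samples: predictions
# should change little under a small input perturbation (a generic consistency
# regulariser; the perturbation and epsilon are placeholder assumptions).
import torch
import torch.nn.functional as F

def smoothness_penalty(model, x_target, eps=0.1):
    with torch.no_grad():
        p = F.softmax(model(x_target), dim=1)                 # reference predictions
    x_noisy = x_target + eps * torch.randn_like(x_target)     # placeholder perturbation
    log_q = F.log_softmax(model(x_noisy), dim=1)
    return F.kl_div(log_q, p, reduction="batchmean")          # encourage local invariance

# Typical use: total_loss = source_cls_loss + lam * smoothness_penalty(model, xt)
```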
“…However, under domain shift the target samples might initially be wrongly assigned to the irrelevant categories, as shown in Fig. 1 (b) [18], [19], [20], [21], [22], making it hard to obtain a perfect alignment. The experimental results of these methods also confirm that it is difficult to accurately identify the irrelevant categories [11], [12], [17] in the source domain.…”
Section: Introduction
Mentioning confidence: 99%