2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00151

Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation

Abstract: Domain adaptation enables the learner to safely generalize to novel environments by mitigating domain shifts across distributions. Previous works may not effectively uncover the underlying reasons for the drastic model degradation on the target task. In this paper, we empirically reveal that the erratic discrimination on the target domain mainly stems from its much smaller feature norms with respect to those of the source domain. To this end, we propose a novel parameter-free Adaptive Feature Norm approach. […]
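The abstract's description of the parameter-free Adaptive Feature Norm idea can be made concrete with a short sketch. The PyTorch-style snippet below is a minimal illustration, assuming a feature backbone that outputs (batch, dim) tensors; the increment delta_r, the loss weight lam, and the training_step helper are hypothetical choices for illustration, and detaching the current norm is a common simplification of the stepwise update, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def stepwise_norm_loss(features: torch.Tensor, delta_r: float = 1.0) -> torch.Tensor:
    """Encourage each sample's feature norm to grow by delta_r per step.

    `features` is a (batch, dim) tensor from the shared backbone; the
    regression target is the current norm (detached) plus delta_r, so
    every update nudges norms upward without a fixed global radius.
    """
    norms = features.norm(p=2, dim=1)        # current per-sample L2 norms
    target = norms.detach() + delta_r        # stop-gradient target
    return F.mse_loss(norms, target)

# Hypothetical training step: source classification loss plus the norm
# loss applied to both source and target batches (lam is assumed).
def training_step(backbone, classifier, x_src, y_src, x_tgt, lam=0.05):
    f_src, f_tgt = backbone(x_src), backbone(x_tgt)
    cls_loss = F.cross_entropy(classifier(f_src), y_src)
    norm_loss = stepwise_norm_loss(f_src) + stepwise_norm_loss(f_tgt)
    return cls_loss + lam * norm_loss
```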

Cited by 469 publications (258 citation statements). References 38 publications.
“…Zhang et al. [14] presented a two-domain-classifier strategy to identify the importance scores of source samples and match the reweighted source domain to the target domain based on adversarial learning. In addition, Xu et al. [15] proposed to adapt the feature norms of both domains to achieve an equilibrium, which is free from any relationship between the label spaces. Recently, some methods apply ensemble learning to improve the discrimination ability of the extracted features [37], [38].…”
Section: B. Partial Domain Adaptation
Citation type: mentioning; confidence: 99%
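To illustrate the "equilibrium" reading of Xu et al. [15] in the quote above, here is a minimal sketch that pulls the mean feature norms of both domains toward one shared radius. The radius value, the mean-over-batch reduction, and the function name are assumptions for illustration, not the cited paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hard_norm_matching_loss(f_src: torch.Tensor,
                            f_tgt: torch.Tensor,
                            radius: float = 25.0) -> torch.Tensor:
    """Pull the mean L2 feature norms of both domains toward one shared
    radius, so source and target settle at the same norm scale.
    `radius` is an assumed hyperparameter."""
    src_norm = f_src.norm(p=2, dim=1).mean()
    tgt_norm = f_tgt.norm(p=2, dim=1).mean()
    r = torch.tensor(radius, device=f_src.device)
    return F.mse_loss(src_norm, r) + F.mse_loss(tgt_norm, r)
```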
“…Domain discrepancies commonly exist across different datasets, and various domain adaptation methods [29, 37, 46] have been proposed to learn domain-invariant features so that a classifier/predictor learned on the source datasets can be generalized to the target; [19, 26] address visual interaction learning and reasoning. These works propose to explicitly model label dependencies in the form of graphs and adopt the graph to guide feature interaction learning.…”
Section: Adversarial Domain Adaptation
Citation type: mentioning; confidence: 99%
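As context for the adversarial domain adaptation methods referenced above, the sketch below shows the widely used gradient-reversal trick for learning domain-invariant features against a domain classifier. It is a generic, illustrative implementation; `domain_classifier` (assumed to output one logit per sample) and the loss weighting are assumptions, and it is not the specific formulation of any work cited in the quote.

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam in the
    backward pass, so the feature extractor learns to fool the domain
    classifier while the domain classifier itself is trained normally."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def domain_adversarial_loss(features, domain_labels, domain_classifier, lam=1.0):
    """Binary domain loss on gradient-reversed features (helper names assumed)."""
    reversed_feats = GradientReversal.apply(features, lam)
    logits = domain_classifier(reversed_feats)          # shape (batch, 1) assumed
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits.squeeze(1), domain_labels.float())
```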
“…Apart from MMD, Zhang et al. [22] define a new divergence, Margin Disparity Discrepancy (MDD), and validate that it has rigorous generalization bounds. SAFN [39] not only utilizes the feature norm to quantitatively measure domain statistics but also suggests that larger-norm features can boost knowledge transfer.…”
Section: Discrepancy Metric Minimization
Citation type: mentioning; confidence: 99%
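Since the quote contrasts MMD-based discrepancy minimization with the norm-based view of SAFN, a minimal sketch of a biased RBF-kernel MMD estimate between source and target feature batches follows; the single fixed bandwidth `sigma` is an assumption (multi-kernel variants are common in practice).

```python
import torch

def rbf_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float) -> torch.Tensor:
    # Pairwise squared Euclidean distances, then a Gaussian kernel.
    d2 = torch.cdist(x, y, p=2).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(f_src: torch.Tensor, f_tgt: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of the squared MMD between two feature batches."""
    k_ss = rbf_kernel(f_src, f_src, sigma).mean()
    k_tt = rbf_kernel(f_tgt, f_tgt, sigma).mean()
    k_st = rbf_kernel(f_src, f_tgt, sigma).mean()
    return k_ss + k_tt - 2 * k_st
```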