2021
DOI: 10.48550/arxiv.2103.13561
Preprint

On Evolving Attention Towards Domain Adaptation

Kekai Sheng,
Ke Li,
Xiawu Zheng
et al.

Abstract: Towards better unsupervised domain adaptation (UDA), researchers have recently proposed various domain-conditioned attention modules and made promising progress. However, since the configuration of attention, i.e., the type and position of the attention module, affects performance significantly, it is more general to optimize the attention configuration automatically so that it is specialized for an arbitrary UDA scenario. For the first time, this paper proposes EvoADA: a novel framework to evolve the att…

Cited by 1 publication (1 citation statement)
References 35 publications (93 reference statements)
“…A mainstream methodology is distribution alignment, which is mainly based on Maximum Mean Discrepancy (MMD) [1,2,23,25,30] or adversarial methods [9,11,26,45,46]. Besides, some works make further improvements via pseudo-labeling [33], co-training [45], entropy regularization [36], and evolutionary-based architecture design [34]. Recently, a growing number of researchers have focused on more realistic scenarios: considering user privacy, [22,24] investigate the setting where only source-domain models, rather than source data, are available during training.…”
Section: Related Work
confidence: 99%
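
For context (not part of the quoted statement): the MMD criterion referenced above is typically used in its squared, kernel-based empirical form. A minimal sketch, assuming n source features x_i, m target features y_j, and a chosen kernel k (the specific kernel and features are left unspecified in the statement):

\widehat{\mathrm{MMD}}^2(X, Y) = \frac{1}{n^2}\sum_{i,i'} k(x_i, x_{i'}) + \frac{1}{m^2}\sum_{j,j'} k(y_j, y_{j'}) - \frac{2}{nm}\sum_{i,j} k(x_i, y_j)

Distribution-alignment methods add an estimator of this quantity to the training loss so that source and target feature distributions are drawn together.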