2021
DOI: 10.1007/978-3-030-67658-2_23
Unsupervised Multi-source Domain Adaptation for Regression

Abstract: We consider the problem of unsupervised domain adaptation from multiple sources in a regression setting. We propose an original method that benefits from the different sources through a weighted combination of them. For this purpose, we define a new measure of similarity between probability distributions for domain adaptation, which we call the hypothesis discrepancy. We then prove a new bound for unsupervised domain adaptation combining multiple sources. From this bound we derive a novel adversarial domain adapta…
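As a rough illustration of the hypothesis-discrepancy idea from the abstract, the sketch below computes an empirical discrepancy between two toy 1-D regression domains: the maximum, over a hypothesis class, of the gap between the source and target expected losses with respect to a fixed hypothesis h. The data, the hypothesis h, and the small finite class H are all assumptions for illustration; the paper works with richer hypothesis classes and an adversarial estimator rather than enumeration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression domains (assumed data): target inputs are shifted.
Xs = rng.normal(0.0, 1.0, 200)   # source inputs
Xt = rng.normal(1.5, 1.0, 200)   # target inputs

def h(x):                        # the current hypothesis (assumed linear)
    return 0.5 * x

# A small finite class H of linear predictors x -> a*x + c, standing in
# for the full hypothesis class used in the paper.
H = [(a, c) for a in np.linspace(-2, 2, 9) for c in np.linspace(-2, 2, 9)]

def hypothesis_discrepancy(Xs, Xt, h, H):
    """max over h' in H of |E_S (h'(x)-h(x))^2 - E_T (h'(x)-h(x))^2|."""
    best = 0.0
    for a, c in H:
        gap_s = np.mean((a * Xs + c - h(Xs)) ** 2)
        gap_t = np.mean((a * Xt + c - h(Xt)) ** 2)
        best = max(best, abs(gap_s - gap_t))
    return best

d_shifted = hypothesis_discrepancy(Xs, Xt, h, H)  # large: domains differ
d_same = hypothesis_discrepancy(Xs, Xs, h, H)     # identical domains -> 0
```

Enumeration over a finite class is used here only to make the max explicit; an adversarial method instead parameterizes h' and ascends this objective by gradient.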

Cited by 18 publications (25 citation statements)
References 11 publications
“…This process aims to minimize the H-divergence [1]. Since this seminal work, many domain adaptation methods based on adversarial networks have been proposed with other metrics, such as the Wasserstein distance [27], the discrepancy [26], the disparity discrepancy [35] and the h-discrepancy [23]. The latter work highlights the advantage of using the discrepancy over the H-divergence for handling regression tasks.…”
Section: A. Adversarial Domain Adaptation
confidence: 99%
“…Adversarial domain adaptation methods were originally introduced to learn an encoding space in which the source and target data cannot be distinguished by any domain classifier, with the encoder and the domain classifier trained with opposing objectives [13]. Since then, adversarial training has been used as a powerful tool to minimize complex losses defined as maxima over functional spaces [23], [26], [27], [35]. The success of adversarial methods lies in their computational speed and their ability to scale to large data sets through stochastic gradient descent-ascent algorithms.…”
Section: Introduction
confidence: 99%
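The descent-ascent mechanism this statement describes can be shown in a minimal one-dimensional sketch: a scalar "encoder" shifts source features, a logistic "domain classifier" tries to tell the two domains apart, and alternating ascent (classifier) and descent (encoder) steps align the domains. All data, parameters, and the scalar encoder are hypothetical stand-ins; real methods use neural encoders and classifiers trained the same way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D domains (assumed data): target features are shifted by +2.
source = rng.normal(0.0, 1.0, 500)
target = rng.normal(2.0, 1.0, 500)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = 0.0      # encoder parameter: encodes source as x + theta
w, b = 0.0, 0.0  # domain classifier: p(target | z) = sigmoid(w*z + b)
lr = 0.05

for _ in range(2000):
    zs = source + theta
    ps = sigmoid(w * zs + b)        # classifier belief that zs is target
    pt = sigmoid(w * target + b)    # classifier belief on true target
    # Ascent step: the classifier improves domain discrimination
    # (gradient ascent on E_t[log pt] + E_s[log(1 - ps)]).
    w += lr * (np.mean((1 - pt) * target) - np.mean(ps * zs))
    b += lr * (np.mean(1 - pt) - np.mean(ps))
    # Descent step: the encoder shifts source features to fool the
    # classifier (the gradient-reversal trick, in one dimension).
    theta += lr * np.mean(ps) * w

gap_before = abs(np.mean(source) - np.mean(target))         # ~2.0
gap_after = abs(np.mean(source + theta) - np.mean(target))  # much smaller
```

Each iteration is one stochastic gradient descent-ascent step: the inner maximization (the classifier) estimates the divergence, and the outer minimization (the encoder) reduces it, which is exactly why these methods scale to large data sets.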
“…The DAT approach has been applied and confirmed to effectively compensate for the mismatch between source (training-time) and target (testing-time) conditions in numerous tasks, such as speech signal processing [19,20], image processing [15,21], and wearable sensor signal processing [22]. A later development, Multisource Domain Adversarial Networks (MDAN) [23], extended the original DAT to lift the single-domain-transition constraint, using multiple domain classifiers to extract deep features that are discriminative for the main learning task while remaining invariant to multiple domain shifts [3,24].…”
Section: Related Work
confidence: 99%
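The multi-source setting described above ultimately needs a rule for combining per-source models, as in the weighted combination proposed by the paper under review. The sketch below weights three hypothetical source predictors by the inverse of a simple distance to the target; the mean gap used here is a crude stand-in for the hypothesis discrepancy, and the data and predictors are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three toy 1-D source domains and one target domain (assumed data).
sources = [rng.normal(m, 1.0, 300) for m in (0.0, 1.0, 3.0)]
target = rng.normal(0.8, 1.0, 300)

# Hypothetical per-source predictors, standing in for models trained
# on each labelled source domain.
predictors = [lambda x, m=m: x + m for m in (0.0, 1.0, 3.0)]

# Weight each source inversely to its distance from the target.
# (Mean gap here; the paper weights using its discrepancy measure.)
gaps = np.array([abs(np.mean(s) - np.mean(target)) for s in sources])
weights = 1.0 / (gaps + 1e-6)
weights /= weights.sum()          # normalize to a convex combination

def combined(x):
    """Target prediction as the weighted combination of source models."""
    return sum(w * p(x) for w, p in zip(weights, predictors))

y = combined(np.array([0.0, 1.0]))  # predictions for two target inputs
```

The intuition matches the bound cited in the abstract: sources that look more like the target (small discrepancy) should dominate the combination, while distant sources are down-weighted.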