2023
DOI: 10.1109/tnnls.2021.3093468
Rethinking Maximum Mean Discrepancy for Visual Domain Adaptation

Cited by 54 publications (12 citation statements) | References 39 publications
“…We adopt maximum mean discrepancy (MMD) to measure the data distribution difference between the source and target domains before and after domain adaptation. As a popular metric, MMD has been widely used in domain adaptation research (Kumagai and Iwata, 2019; Long et al., 2013; 2014; Pan et al., 2010; Wang et al., 2021; Yan et al., 2017). It is defined as $\mathrm{MMD}(X_s, X_t) = \left\| \frac{1}{n_s}\sum_{i=1}^{n_s}\phi(x_i^s) - \frac{1}{n_t}\sum_{j=1}^{n_t}\phi(x_j^t) \right\|_{\mathcal{H}}^2$, where $\mathcal{H}$ denotes the reproducing kernel Hilbert space endowed with a kernel function $k$, and $\phi(\cdot)$ is the corresponding feature map. If the MMD distance between the source and target domains decreases after adaptation, the data distribution difference has become smaller.…”
Section: Empirical Evaluation Of Feature-level Data Adaptation Algori...mentioning
confidence: 99%
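The empirical MMD described in the excerpt can be computed entirely through kernel evaluations, since $\|\mu_s - \mu_t\|_{\mathcal{H}}^2$ expands into mean kernel values. A minimal sketch (the Gaussian kernel, bandwidth, and array names here are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix: k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(xs, xt, sigma=1.0):
    # Biased empirical estimate of squared MMD between source xs and target xt:
    # mean(k_ss) + mean(k_tt) - 2 * mean(k_st), i.e. the kernel expansion of
    # ||mean_i phi(xs_i) - mean_j phi(xt_j)||^2 in the RKHS.
    k_ss = gaussian_kernel(xs, xs, sigma)
    k_tt = gaussian_kernel(xt, xt, sigma)
    k_st = gaussian_kernel(xs, xt, sigma)
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(200, 5))       # source samples
xt_far = rng.normal(2.0, 1.0, size=(200, 5))   # strongly shifted target
xt_near = rng.normal(0.1, 1.0, size=(200, 5))  # nearly aligned target
print(mmd2(xs, xt_far) > mmd2(xs, xt_near))    # larger shift gives larger MMD
```

Consistent with the excerpt, a smaller `mmd2` value after adaptation indicates a smaller distribution gap between the two domains.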
“…Instance-based adaptation methods seek a strategy that selects "good" samples in the source domain to participate in model training and suppresses "bad" samples to prevent negative transfer. Kernel mean matching (KMM) [23] minimizes the maximum mean discrepancy [24] by reweighting the source data relative to the target data in a reproducing kernel Hilbert space (RKHS), which corrects the inconsistent distribution between domains. As a classic instance-based adaptation method, transfer adaptive boosting (TrAdaBoost) [25] extends the AdaBoost algorithm to reweight source-labeled and target-labeled samples so that the distributions between domains match.…”
Section: Domain Adaptation (Da)mentioning
confidence: 99%
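The KMM idea in the excerpt amounts to choosing source weights $\beta$ that minimize the MMD between the reweighted source mean and the target mean in the RKHS. A simplified sketch, assuming an RBF kernel and projected gradient descent over box-constrained weights (the original KMM formulation solves a quadratic program with an additional constraint keeping the mean of $\beta$ near 1, which is omitted here):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix between rows of a and rows of b
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma**2))

def kmm_weights(xs, xt, sigma=1.0, B=10.0, steps=500, lr=1e-3):
    # Minimize ||sum_i beta_i phi(xs_i)/ns - sum_j phi(xt_j)/nt||^2, which
    # reduces to the quadratic 0.5 * beta^T K beta - kappa^T beta, with
    # K = k(xs, xs) and kappa_i = (ns/nt) * sum_j k(xs_i, xt_j).
    ns, nt = len(xs), len(xt)
    K = rbf(xs, xs, sigma)                              # source Gram matrix
    kappa = (ns / nt) * rbf(xs, xt, sigma).sum(axis=1)  # source-target cross term
    beta = np.ones(ns)
    for _ in range(steps):
        grad = K @ beta - kappa
        beta = np.clip(beta - lr * grad, 0.0, B)        # project onto [0, B]
    return beta

rng = np.random.default_rng(1)
# Source has two clusters; the target matches only the first one.
xs = np.concatenate([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
xt = rng.normal(0, 1, (200, 2))
beta = kmm_weights(xs, xt)
# "Good" source samples (near the target) receive larger weights,
# while the off-distribution cluster is suppressed.
print(beta[:100].mean() > beta[100:].mean())
```

This illustrates the "select good samples, suppress bad samples" strategy: the learned weights down-weight source points unlikely under the target distribution.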
“…Their objective is to enhance the generalization of a model trained on a training set (source domain) to a test set (target domain), where these two domains are related but follow different distributions (Zhang, 2019; Jiang et al., 2022b). Recently, domain adaptation techniques have found extensive application in computer vision tasks such as image classification (Liu et al., 2021a; Wang et al., 2022; 2023b), semantic segmentation (Cheng et al., 2021; Wang et al., 2023c), and object detection (Chen et al., 2018; Saito et al., 2019; Zhu et al., 2019; Jiang et al., 2022a), delivering outstanding performance.…”
Section: Introductionmentioning
confidence: 99%