2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52729.2023.01130

DARE-GRAM : Unsupervised Domain Adaptation Regression by Aligning Inverse Gram Matrices

Ismail Nejjar,
Qin Wang,
Olga Fink

Abstract: Unsupervised Domain Adaptation for Regression (UDAR) aims to adapt a model from a labeled source domain to an unlabeled target domain for regression tasks. Recent successful works in UDAR mostly focus on subspace alignment, involving the alignment of a selected subspace within the entire feature space. This contrasts with the feature alignment methods used for classification, which aim at aligning the entire feature space and have proven effective but are less so in regression settings. Specifically, while cla…
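As the title and abstract indicate, the alignment is carried out on (pseudo-)inverse Gram matrices of the features rather than on the raw feature space or a selected subspace. A minimal NumPy sketch of such an inverse-Gram discrepancy is shown below; the function name, the regularization, and the plain Frobenius-norm comparison are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def inverse_gram_discrepancy(source_feats: np.ndarray,
                             target_feats: np.ndarray,
                             eps: float = 1e-6) -> float:
    """Toy discrepancy between the pseudo-inverse Gram matrices of two
    feature batches (each of shape batch_size x feature_dim).

    Illustrative stand-in for the alignment idea described in the abstract,
    not the paper's actual loss.
    """
    d = source_feats.shape[1]
    # Gram matrices in feature space (feature_dim x feature_dim),
    # regularized so the pseudo-inverse is well conditioned.
    gram_s = source_feats.T @ source_feats + eps * np.eye(d)
    gram_t = target_feats.T @ target_feats + eps * np.eye(d)

    # Align the inverse Gram matrices instead of the raw features.
    inv_s = np.linalg.pinv(gram_s)
    inv_t = np.linalg.pinv(gram_t)
    return float(np.linalg.norm(inv_s - inv_t, ord="fro"))
```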

Cited by 14 publications (2 citation statements) · References: 85 publications

“…To assess the empirical performance of OTTEHR, we benchmarked OTTEHR against the standard statistical methods Transfer Component Analysis (TCA) [Pan et al, 2010], Correlation Analysis (CA) [Sun et al, 2017], Geodesic Flow Kernel (GFK) [Gong et al, 2012], machine learning OT-based method deepJDOT [Damodaran et al, 2018], and machine learning non-OT-based methods Representation Subspace Distance (RSD) [Chen et al, 2021] and inverse Gram matrices (daregram) [Nejjar et al, 2023] (also see Section 2) on the predictions for target groups for the transfer learning tasks detailed in Section 4.2 using mean absolute error (MAE) and root mean square error (RMSE) [Chai and Draxler, 2014].…”
Section: Results
Mentioning confidence: 99%
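The quoted benchmark evaluates target-domain predictions with mean absolute error and root mean square error. As a point of reference, a minimal NumPy sketch of both metrics (standard definitions, not tied to any of the cited implementations) is:

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute error: average magnitude of the prediction errors.
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Root mean square error: square root of the mean squared error,
    # which penalizes large deviations more heavily than MAE.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```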
“…Adversarial learning methods minimize the distribution discrepancy by optimizing a selected function over the hypothesis space, concurrently learning feature representations to bridge the gap between domains [Ganin and Lempitsky, 2015, Tzeng et al, 2015, Ganin et al, 2016, Luo et al, 2017, Long et al, 2018, Zhang et al, 2019, Peng et al, 2019]. For regression tasks, most recent TL methods using Representation Subspace Distance (RSD) [Chen et al, 2021] and inverse Gram matrices (daregram) [Nejjar et al, 2023] learn a shared feature extractor by minimizing some discrepancies of source and target features.…”
Section: Related Work
Mentioning confidence: 99%
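Both citing works group RSD and DARE-GRAM as discrepancy-minimization approaches: a shared feature extractor is trained on labeled source data while an alignment term pulls source and target features together. A minimal, hypothetical PyTorch sketch of that training pattern follows; the network sizes, the trade-off weight, and the simple mean-feature discrepancy used here are illustrative assumptions, not the exact objective of either paper.

```python
import torch
import torch.nn as nn

# Hypothetical shared feature extractor and regression head; the sizes,
# the discrepancy term, and the trade-off weight are placeholders, not
# the objective used by RSD or DARE-GRAM.
feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
regressor = nn.Linear(16, 1)
optimizer = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(regressor.parameters()), lr=1e-3
)
mse = nn.MSELoss()
trade_off = 0.1  # weight of the unsupervised alignment term


def feature_discrepancy(f_src: torch.Tensor, f_tgt: torch.Tensor) -> torch.Tensor:
    # Simple stand-in alignment term: squared distance between batch-mean features.
    return (f_src.mean(dim=0) - f_tgt.mean(dim=0)).pow(2).sum()


def train_step(x_src, y_src, x_tgt):
    # Supervised regression loss on labeled source data plus an
    # unsupervised discrepancy between source and target features.
    f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
    loss = mse(regressor(f_src), y_src) + trade_off * feature_discrepancy(f_src, f_tgt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```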