Adversarial learning methods minimize the distribution discrepancy by optimizing a selected function over the hypothesis space, concurrently learning feature representations that bridge the gap between domains [Ganin and Lempitsky, 2015, Tzeng et al., 2015, Ganin et al., 2016, Luo et al., 2017, Long et al., 2018, Zhang et al., 2019, Peng et al., 2019]. For regression tasks, most recent TL methods, such as Representation Subspace Distance (RSD) [Chen et al., 2021] and inverse Gram matrices (DARE-GRAM) [Nejjar et al., 2023], learn a shared feature extractor by minimizing discrepancies between source and target features.
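To make the shared idea concrete, the sketch below illustrates one way a feature-space discrepancy of this family could be computed: a Frobenius-norm gap between regularized inverse Gram matrices of source and target feature batches, loosely in the spirit of DARE-GRAM. This is a minimal illustration, not the published algorithm; the function name `gram_discrepancy` and the ridge term `eps` are assumptions for this example, and the actual methods involve additional components (e.g., subspace alignment in RSD, angle and scale alignment in DARE-GRAM).

```python
import numpy as np

def gram_discrepancy(feat_s, feat_t, eps=1e-3):
    """Loose sketch of an inverse-Gram feature discrepancy.

    feat_s, feat_t: (batch, dim) feature matrices from the shared extractor.
    eps: hypothetical ridge term added for numerical stability.
    """
    d = feat_s.shape[1]
    # Regularized Gram matrices of the two feature batches.
    gram_s = feat_s.T @ feat_s + eps * np.eye(d)
    gram_t = feat_t.T @ feat_t + eps * np.eye(d)
    # Frobenius-norm gap between their inverses; minimizing this w.r.t.
    # the feature extractor would pull the two domains together.
    return np.linalg.norm(np.linalg.inv(gram_s) - np.linalg.inv(gram_t), ord="fro")

rng = np.random.default_rng(0)
fs = rng.normal(size=(32, 8))
ft = rng.normal(size=(32, 8))
print(gram_discrepancy(fs, fs))  # identical batches give zero discrepancy
print(gram_discrepancy(fs, ft) > 0.0)
```

In a training loop this scalar would be added to the task (regression) loss, so that gradients flow back into the shared feature extractor and shrink the domain gap.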