2019
DOI: 10.1109/tcyb.2018.2820174
Transfer Independently Together: A Generalized Framework for Domain Adaptation

Abstract: Currently, unsupervised heterogeneous domain adaptation in a generalized setting, which is the most common scenario in real-world applications, remains insufficiently explored. Existing approaches are either limited to special cases or require labeled target samples for training. This paper aims to overcome these limitations by proposing a generalized framework, named Transfer Independently Together (TIT). Specifically, we learn multiple transformations, one for each domain (independently), to map data on…
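The abstract is truncated above, so the following is only a rough sketch of the general idea it describes: each domain gets its own, independently parameterized transformation that maps its (possibly differently dimensioned) features into a shared subspace. The function name, the PCA-style stand-in for the learned maps, and the toy dimensions are assumptions for illustration, not the actual TIT objective or solver.

```python
# Illustrative sketch only: PCA-style per-domain projections stand in for the
# transformations that TIT would learn; this is NOT the paper's actual method.
import numpy as np

def fit_domain_projections(Xs, Xt, d=30):
    """Learn one linear map per domain (hypothetical helper).

    Xs is n_s x d_s, Xt is n_t x d_t, and d_s may differ from d_t
    (the heterogeneous setting); both are mapped into a shared d-dim subspace.
    """
    def pca_map(X):
        Xc = X - X.mean(axis=0)
        # Top-d right singular vectors define this domain's projection.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T

    Ps, Pt = pca_map(Xs), pca_map(Xt)          # learned independently per domain
    Zs = (Xs - Xs.mean(axis=0)) @ Ps           # source data in the shared subspace
    Zt = (Xt - Xt.mean(axis=0)) @ Pt           # target data in the shared subspace
    return Ps, Pt, Zs, Zt

# Toy usage with mismatched feature dimensions (e.g., 800-dim vs. 4096-dim).
rng = np.random.default_rng(0)
Ps, Pt, Zs, Zt = fit_domain_projections(rng.normal(size=(50, 800)),
                                         rng.normal(size=(40, 4096)))
print(Zs.shape, Zt.shape)   # (50, 30) (40, 30)
```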

Cited by 250 publications (71 citation statements); References 35 publications.
“…In addition, there are also some works that learn an intermediate space shared by the visual features and semantic features [4,38,39]. Besides, ZSL is also related to domain adaptation and cold-start recommendation [18][19][20][21].…”
Section: Zero-shot Learning (mentioning)
confidence: 99%
“…According to the properties of the sample features, domain adaptation can be categorized into homogeneous domain adaptation [6], [18], [29] and heterogeneous domain adaptation [19], [20], [30]. In homogeneous domain adaptation, the data from both domains are sampled from the same feature space.…”
Section: A. Domain Adaptation (mentioning)
confidence: 99%
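For concreteness, the distinction drawn in this excerpt can be written out as follows (a standard formalization, not text from the cited work):

```latex
% Source and target domains with possibly different feature dimensions
\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}, \; x_i^s \in \mathbb{R}^{d_s},
\qquad
\mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t}, \; x_j^t \in \mathbb{R}^{d_t}.
% Homogeneous domain adaptation: the two domains share one feature space,
%   d_s = d_t  (only the distributions differ).
% Heterogeneous domain adaptation: the feature spaces themselves differ,
%   d_s \neq d_t, so domain-specific transformations are needed to compare them.
```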
“…According to constrained optimization theory, we introduce a Lagrange multiplier Φ and obtain the Lagrange function for Eq. (19) as follows:…”
Section: Overall (mentioning)
confidence: 99%
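Eq. (19) itself is not reproduced in this excerpt, so the block below only sketches the generic pattern behind this step, assuming an equality-constrained matrix problem; the symbols f, g, and W are placeholders, not the cited paper's notation.

```latex
% Assumed generic form (Eq. (19) is not shown in the excerpt):
\min_{W} \; f(W) \quad \text{s.t.} \quad g(W) = 0 .
% Introducing a Lagrange multiplier \Phi gives the Lagrange function
\mathcal{L}(W, \Phi) = f(W) + \operatorname{tr}\!\bigl(\Phi^{\top} g(W)\bigr),
% and the stationarity conditions
%   \partial \mathcal{L} / \partial W = 0, \quad g(W) = 0
% are then solved jointly for W and \Phi.
```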
“…Moreover, some deep neural network (DNN) based models [16]-[18] have been proposed to extract nonlinear features with stronger representational power. It is worth mentioning that some work in other related areas has also provided feasible solutions, such as low-rank coding [19], the Gaussian Process Latent Variable Model (GPLVM) [20], or Maximum Mean Discrepancy (MMD) [21], [22]. Despite promising results in real applications, the aforementioned MvSL-based methods fail to work when the view information of test samples is unknown in advance [23].…”
Section: Introduction (mentioning)
confidence: 99%
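Since MMD is named in this excerpt as one of the feasible alignment criteria, the snippet below gives a minimal, self-contained estimate of squared MMD with an RBF kernel; the biased estimator and the median-heuristic bandwidth are assumptions for illustration, not code from the cited papers.

```python
# Minimal sketch: biased estimate of squared MMD between two samples.
import numpy as np

def rbf_kernel(A, B, gamma):
    # Pairwise squared Euclidean distances, then a Gaussian (RBF) kernel.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=None):
    """Biased squared-MMD estimate between samples X (n x d) and Y (m x d)."""
    if gamma is None:                              # median heuristic (assumption)
        Z = np.vstack([X, Y])
        sq = np.sum(Z**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * Z @ Z.T
        gamma = 1.0 / np.median(sq[sq > 0])
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Toy usage: a small mean shift between "source" and "target" features.
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(0.0, 1.0, (100, 20)), rng.normal(0.5, 1.0, (100, 20))
print(mmd2(Xs, Xt))
```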