2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014)
DOI: 10.1109/cvpr.2014.318
Domain Adaptation on the Statistical Manifold

Cited by 129 publications (69 citation statements). References 17 publications.
“…Similarly, Baktashmotlagh et al [18] propose a statistically invariant sample selection method that chooses landmarks using the Hellinger distance instead of MMD. However, in their approach, the landmarks are selected from the source domain and are not necessarily the samples closest to the target domain, whereas our approach identifies the target instances that are more likely to be correctly predicted by the source classifier.…”
Section: Related Work (mentioning)
confidence: 98%
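As a rough illustration of the contrast drawn in this excerpt, the snippet below sketches what Hellinger-distance-based landmark selection from the source domain could look like. It is a toy 1-D outline under our own assumptions (histogram density estimates, a greedy selection loop, hypothetical bin and landmark counts), not the actual method of [18].

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two normalized histograms.
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def greedy_landmarks(source, target, n_landmarks=30, n_bins=20):
    # Greedily pick source samples whose empirical histogram stays
    # closest, in Hellinger distance, to the target histogram.
    edges = np.histogram_bin_edges(np.concatenate([source, target]), bins=n_bins)
    q, _ = np.histogram(target, bins=edges)
    chosen, remaining = [], list(range(len(source)))
    for _ in range(n_landmarks):
        best_i, best_d = None, np.inf
        for i in remaining:
            p, _ = np.histogram(source[chosen + [i]], bins=edges)
            d = hellinger(p + 1e-8, q + 1e-8)  # smoothing avoids empty bins
            if d < best_d:
                best_i, best_d = i, d
        chosen.append(best_i)
        remaining.remove(best_i)
    return chosen

# Toy usage: two shifted Gaussians standing in for source and target.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, 200)
tgt = rng.normal(0.5, 1.0, 200)
print(len(greedy_landmarks(src, tgt)))
```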
“…By utilizing (18), knowledge of the distribution of Z is required to calculate ρ. By the definition of Z, it is essentially a linear projection from a high-dimensional space to a 1-D space.…”
Section: Justification (mentioning)
confidence: 99%
“…Geodesic flow kernel (GFK) [25] extended the idea of sampled points on the manifold [27] and proposed to learn the geodesic flow kernel between domains. The work of [3] used the Hellinger distance to approximate the geodesic distance in Riemannian space. [2] proposed to use the Grassmann manifold for domain adaptation, but ignored conditional distribution alignment.…”
Section: Subspace and Manifold Learning (mentioning)
confidence: 99%
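For context on the approximation mentioned in this excerpt, the relation between the Hellinger distance and the geodesic distance on the statistical manifold can be summarized as follows; this is a standard identity written in our own notation, and the factor conventions may differ from those in the cited papers.

$$\rho(p, q) = \int \sqrt{p(x)\, q(x)}\, dx, \qquad d_{\mathrm{geo}}(p, q) = \arccos \rho(p, q),$$

where $\rho$ is the Bhattacharyya coefficient and $d_{\mathrm{geo}}$ is the geodesic distance between the square-root densities $\sqrt{p}$ and $\sqrt{q}$ on the unit sphere in $L^2$ (the Fisher-Rao distance up to a conventional factor of 2). The squared Hellinger distance is the matching chordal quantity,

$$H^2(p, q) = \frac{1}{2} \int \big(\sqrt{p(x)} - \sqrt{q(x)}\big)^2 dx = 1 - \rho(p, q) = 1 - \cos d_{\mathrm{geo}}(p, q),$$

so $H(p, q) \approx d_{\mathrm{geo}}(p, q)/\sqrt{2}$ when the two distributions are close, which is why the Hellinger distance serves as a tractable stand-in for the geodesic distance.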
“…However, in this paper, we argue that this assumption is not practical in real applications. For example, when two domains are very dissimilar (e.g., transfer learning between (1) and (3) in Fig. 1(a)), the marginal distribution is more important.…”
Section: Introduction (mentioning)
confidence: 99%
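To make the marginal-versus-conditional distinction in this excerpt concrete, here is a minimal sketch contrasting a marginal discrepancy with a class-conditional one, using the linear-kernel MMD (the distance between feature means). The pseudo-label argument and the function names are our own illustrative choices, not part of the cited work.

```python
import numpy as np

def mmd2_linear(X, Y):
    # Squared MMD with a linear kernel: squared distance between feature means.
    return float(np.sum((X.mean(axis=0) - Y.mean(axis=0)) ** 2))

def marginal_mmd2(Xs, Xt):
    # Discrepancy between the whole source and target clouds.
    return mmd2_linear(Xs, Xt)

def conditional_mmd2(Xs, ys, Xt, yt_pseudo):
    # Average per-class discrepancy; target labels are pseudo-labels,
    # e.g. predictions of a source-trained classifier.
    vals = []
    for c in np.unique(ys):
        if np.any(yt_pseudo == c):
            vals.append(mmd2_linear(Xs[ys == c], Xt[yt_pseudo == c]))
    return float(np.mean(vals)) if vals else float("inf")
```

When the domains are very dissimilar, the pseudo-labels feeding the conditional term are unreliable, which is consistent with the excerpt's argument that the marginal distribution matters more in that regime.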
“…Semi-supervised domain adaptation usually constructs the learning model using prior information from the class labels of the source and target samples [11], or the pairwise similarity between them [3,4]. As neither class labels nor similarity information of the source and target samples is available, unsupervised domain adaptation often exploits the underlying geometric structure [15,16] or the intrinsic probability distribution of the data across domains to bridge the gap between domains [9,17]. In this paper, we focus on unsupervised visual domain adaptation, which is more common in real-world applications and relatively more challenging as well.…”
Section: Introduction (mentioning)
confidence: 99%