2018
DOI: 10.1109/tpami.2017.2704624
Domain Generalization and Adaptation Using Low Rank Exemplar SVMs

Abstract: Domain adaptation between diverse source and target domains is challenging, especially in real-world visual recognition tasks where the images and videos exhibit significant variations in viewpoint, illumination, quality, etc. In this paper, we propose a new approach for domain generalization and domain adaptation based on exemplar SVMs. Specifically, we decompose the source domain into many subdomains, each of which contains only one positive training sample and all negative samples. Each subdomain…
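As a rough illustration of the exemplar-SVM decomposition described in the abstract, the sketch below trains one linear classifier per positive sample against all negatives and scores test data by the maximum response over exemplars. It is a minimal sketch only: the function names and the C_pos/C_neg weights are illustrative assumptions, and the paper's low-rank regularization that couples the exemplar classifiers is not shown.

```python
# Minimal exemplar-SVM sketch (illustrative; not the paper's full method).
import numpy as np
from sklearn.svm import LinearSVC

def train_exemplar_svms(X_pos, X_neg, C_pos=0.5, C_neg=0.01):
    """Train one linear SVM per positive exemplar against all negative samples.

    Each classifier corresponds to one "subdomain" in the sense of the
    abstract: a single positive training sample plus all negatives.
    The low-rank coupling of the exemplar classifiers is omitted here.
    """
    classifiers = []
    for x in X_pos:
        X = np.vstack([x[None, :], X_neg])
        y = np.array([1] + [-1] * len(X_neg))
        # Per-class weights emulate the heavier penalty usually placed on
        # the single positive so it is not swamped by the negatives.
        clf = LinearSVC(C=1.0, class_weight={1: C_pos, -1: C_neg})
        clf.fit(X, y)
        classifiers.append(clf)
    return classifiers

def exemplar_scores(classifiers, X_test):
    """Score test samples by the maximum response over all exemplar SVMs."""
    all_scores = np.stack([clf.decision_function(X_test) for clf in classifiers])
    return all_scores.max(axis=0)
```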

Cited by 126 publications (63 citation statements)
References 32 publications
“…Second, we compared our DAESVM with unsupervised domain adaptation methods, such as TCA or GFK, implemented with the same dimension reduction and parameter settings as in our model. Finally, we also compared DAESVM with newer transfer learning models, such as low-rank ESVMs [18]. Overall, following the usual transfer learning protocol, we ran the datasets across different pairs of source and target domains.…”
Section: Experiments Results (mentioning, confidence: 99%)
“…(1) Transfer Component Analysis (TCA) [40]; (2) Support Vector Machine (SVM) [43]; (3) Geodesic Flow Kernel (GFK) [28]; (4) Landmarks Selection-based Subspace Alignment (LSSA) [23]; (5) Kernel Mean Matching (KMM) [20]; (6) Subspace Alignment (SA) [44]; (7) Transfer Joint Matching (TJM) [45]; (8) Low-Rank Exemplar-SVMs (LRESVMs) [18]. TCA, GFK, and KMM are classical transfer learning methods. We compare our model with these methods.…”
Section: Experiments Setup (mentioning, confidence: 99%)
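For context on the quoted baselines: TCA and KMM both operate on the maximum mean discrepancy (MMD) between source and target feature distributions. The sketch below is only an illustrative, biased empirical estimate of squared MMD with an RBF kernel; the function names and the gamma value are assumptions, not code from any of the cited papers.

```python
# Illustrative biased estimate of squared MMD between two feature sets.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances, then a Gaussian kernel.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(X_src, X_tgt, gamma=1.0):
    """Biased empirical estimate of the squared maximum mean discrepancy."""
    k_ss = rbf_kernel(X_src, X_src, gamma).mean()
    k_tt = rbf_kernel(X_tgt, X_tgt, gamma).mean()
    k_st = rbf_kernel(X_src, X_tgt, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

A smaller MMD between the transformed source and target features indicates the two domains have been brought closer, which is the quantity these classical methods reduce in different ways.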
“…Consequently, we base our improvements on Faster R-CNN, a slower but accurate detector. Domain Adaptation: Domain adaptation was initially studied for image classification, and the majority of the domain adaptation literature focuses on this problem [10,9,29,21,20,12,48,32,33,14,13,17,1,37,30]. Some of the methods developed in this context include cross-domain kernel learning methods such as adaptive multiple kernel learning (A-MKL) [10], domain transfer multiple kernel learning (DTMKL) [9], and geodesic flow kernel (GFK) [20].…”
Section: Previous Work (mentioning, confidence: 99%)
“…However, in the real world it is impossible to guarantee that assumption. Hence, in visual recognition tasks, a classifier or model usually does not work well because of the data bias between the distributions of the training and test data [1], [2], [3], [4], [5], [6], [7]. The domain discrepancy constitutes a major obstacle to training predictive models across domains.…”
Mentioning (confidence: 99%)