2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00407

Unsupervised Domain Adaptation using Deep Networks with Cross-Grafted Stacks

Abstract: Current deep domain adaptation methods used in computer vision have mainly focused on learning discriminative and domain-invariant features across different domains. In this paper, we present a novel approach that bridges the domain gap by projecting the source and target domains into a common association space through an unsupervised "cross-grafted representation stacking" (CGRS) mechanism. Specifically, we construct variational auto-encoders (VAE) for the two domains, and form bidirectional associations by c…
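The setup the abstract describes, one VAE per domain trained domain-wise, with each decoder kept as separable higher-level and lower-level stacks so they can later be cross-grafted, can be illustrated with a minimal PyTorch-style sketch. Layer sizes, the two-stack split, and all module names below are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of per-domain VAEs with decoders split into two stacks.
# Dimensions and module names are hypothetical.
import torch
import torch.nn as nn

class DomainVAE(nn.Module):
    def __init__(self, in_dim=784, hid_dim=400, z_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hid_dim, z_dim)
        self.fc_logvar = nn.Linear(hid_dim, z_dim)
        # Decoder kept as two stacks: a higher-level stack near the latent
        # code and a lower-level stack near the pixel output, so the stacks
        # from the two domains can later be cross-grafted.
        self.dec_high = nn.Sequential(nn.Linear(z_dim, hid_dim), nn.ReLU())
        self.dec_low = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.dec_low(self.dec_high(z)), mu, logvar

# One VAE per domain, each trained only on its own (unlabeled) data.
vae_source = DomainVAE()
vae_target = DomainVAE()
```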

Cited by 7 publications (7 citation statements)
References 28 publications (44 reference statements)

“…Variational Autoencoder (VAE) has been a prevalent generative model for data generation and it has been used for UDA in literature (Chen, Chen, Jin, Liu, & Cheng, 2019;Hou, Ding, Deng, & Cranefield, 2019;Hsu et al, 2017;Ilse, Tomczak, Louizos, & Welling, 2020;Xu et al, 2020). Hou et al (2019) aim to generate synthetic target-domain data with VAEs trained domain-wisely. Subsequently, the higher-level and lowerlevel layers of the decoders for source and target domains are cross-stacked to form new VAEs which can be used to transform images from one domain to the other.…”
Section: UDA with Data Augmentation (mentioning)
confidence: 99%
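The cross-stacking described in the statement above, grafting the higher-level decoder stack trained on one domain onto the lower-level decoder stack trained on the other to transform images between domains, can be sketched as follows. The shapes, names, and the simple Sequential composition are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch of cross-grafting decoder stacks from two
# domain-wise trained VAEs. Dimensions and names are hypothetical.
import torch
import torch.nn as nn

z_dim, hid_dim, img_dim = 64, 400, 784

def make_decoder_stacks():
    high = nn.Sequential(nn.Linear(z_dim, hid_dim), nn.ReLU())      # near the latent code
    low = nn.Sequential(nn.Linear(hid_dim, img_dim), nn.Sigmoid())  # near the pixel output
    return high, low

src_high, src_low = make_decoder_stacks()   # stands in for the source-domain decoder
tgt_high, tgt_low = make_decoder_stacks()   # stands in for the target-domain decoder

# Cross-grafted generators: one domain's higher-level stack feeds the
# other domain's lower-level stack, yielding cross-domain outputs.
gen_src_to_tgt = nn.Sequential(src_high, tgt_low)
gen_tgt_to_src = nn.Sequential(tgt_high, src_low)

z = torch.randn(8, z_dim)             # latent codes, e.g. from an encoder
target_like = gen_src_to_tgt(z)       # batch rendered with target-style low-level layers
print(target_like.shape)              # torch.Size([8, 784])
```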
“…There has been a large body of research showing that approaches using generative models are useful for improving a model's generalization performance across domains [7]. Generative models mimic real-world data distributions to generate data in the target domain for adaptation, especially using generative adversarial neural networks (GANs) or autoencoder structures [7][8][9][10][11][12][13].…”
Section: Introduction (mentioning)
confidence: 99%
“…This paper is extended from a conference publication [15], where the CGGS-based framework was first presented and some preliminary experiment results were provided. Here in this work we give a detailed presentation of the entire framework, with the following additional technical and experimental contents introduced: (1) A theoretical explanation of the generation of transition spaces as obtained from a probabilistic weights perturbation perspective; (2) new experimental results on more benchmark datasets to expand the empirical evaluation of DATL against the state of the art, plus the new results on cross-task generalization; and (3) further analysis and visual evaluation of the DATL framework.…”
Section: Introduction (mentioning)
confidence: 99%