2016
DOI: 10.48550/arxiv.1612.07828
Preprint

Learning from Simulated and Unsupervised Images through Adversarial Training

Cited by 58 publications (97 citation statements)
References 0 publications

“…Theorem 2.1 is the foundation of many recent works on unsupervised domain adaptation via learning invariant representations [Ajakan et al., 2014, Ganin et al., 2016, Zhao et al., 2018b, Pei et al., 2018, Zhao et al., 2018a]. It has also inspired various applications of domain adaptation with adversarial learning, e.g., video analysis [Hoffman et al., 2016, Shrivastava et al., 2016], natural language understanding [Zhang et al., 2017, Fu et al., 2017], speech recognition [Zhao et al., 2019a, Hosseini-Asl et al., 2018], to name a few. At a high level, the key idea is to learn a rich and parametrized feature transformation g : X → Z such that the induced source and target distributions (on Z) are close, as measured by the H-divergence.…”
Section: A Theoretical Model For Domain Adaptation
confidence: 99%
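The invariant-representation recipe summarized in this statement is concrete enough to sketch. Below is a minimal DANN-style illustration, assuming PyTorch: a shared feature extractor g is trained against a domain discriminator (standing in for the H-divergence) through gradient reversal, so that source and target features become indistinguishable while the task head still fits the labeled source data. Layer sizes and module names are illustrative, not taken from the cited papers.

```python
# Minimal sketch of adversarial invariant-representation learning (DANN-style).
# Assumes PyTorch; architectures and dimensions here are hypothetical.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature = nn.Sequential(nn.Linear(784, 256), nn.ReLU())  # g : X -> Z
label_clf = nn.Linear(256, 10)                            # task head on Z
domain_clf = nn.Linear(256, 2)                            # domain discriminator

def dann_loss(x_src, y_src, x_tgt, lambd=1.0):
    ce = nn.CrossEntropyLoss()
    z_src, z_tgt = feature(x_src), feature(x_tgt)
    task_loss = ce(label_clf(z_src), y_src)
    # Domain labels: 0 = source, 1 = target. Gradient reversal pushes g to make
    # the induced source and target distributions on Z indistinguishable.
    z_all = GradReverse.apply(torch.cat([z_src, z_tgt]), lambd)
    d_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    domain_loss = ce(domain_clf(z_all), d_labels)
    return task_loss + domain_loss
```
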
“…In their seminal work, Shrivastava et al. [8] developed a CGAN conditioned on artificial images of eyes with gaze direction semantic information that produced realistic-looking eye images while maintaining this contextual information. They introduced a novel self-regularization term that minimized the pixel-to-pixel l1 norm in a learned feature space between the contextual input image and the resulting filtered image.…”
Section: Related Work
confidence: 99%
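The refiner objective described in this statement combines an adversarial realism term with the self-regularization term. A minimal sketch, assuming PyTorch, is shown below; refiner and discriminator are hypothetical modules, and the L1 term is written on raw pixels for brevity, whereas the quoted statement applies it in a learned feature space.

```python
# Sketch of a SimGAN-style refiner loss: adversarial realism plus an L1
# self-regularization term that keeps the refined image close to the synthetic
# input (preserving annotations such as gaze direction). Assumes PyTorch;
# `refiner` and `discriminator` are hypothetical modules.
import torch
import torch.nn.functional as F

def refiner_loss(refiner, discriminator, synthetic, lambda_reg=0.5):
    refined = refiner(synthetic)
    # Realism: push the discriminator to label refined images as real (label 1).
    d_out = discriminator(refined)
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    # Self-regularization: per-pixel L1 distance to the synthetic input.
    reg_loss = F.l1_loss(refined, synthetic)
    return adv_loss + lambda_reg * reg_loss
```
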
“…Recently, adversarial domain adaptation based on GANs (Goodfellow et al., 2014) has shown encouraging results for unsupervised domain adaptation directly at the pixel level. These techniques learn a generative model for source-to-target image translation, including from and to multiple domains (Taigman et al., 2016; Shrivastava et al., 2016; Zhu et al., 2017; Isola et al., 2017; Kim et al., 2017). In particular, CycleGAN (Zhu et al., 2017) leverages cycle consistency using a forward GAN and a backward GAN to improve the training stability and performance of image-to-image translation.…”
Section: Related Work
confidence: 99%
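The cycle-consistency constraint mentioned for CycleGAN can also be sketched briefly, assuming PyTorch. The generators G and F below are hypothetical modules, and the full objective additionally includes the two adversarial terms, which are omitted here.

```python
# Sketch of the cycle-consistency term: a forward generator G (source -> target)
# and a backward generator F (target -> source) should approximately invert
# each other. Assumes PyTorch; G and F are hypothetical generator modules.
import torch.nn.functional as F_nn

def cycle_consistency_loss(G, F, x_source, y_target, lambda_cyc=10.0):
    forward_cycle = F(G(x_source))   # x -> G(x) -> F(G(x)) should recover x
    backward_cycle = G(F(y_target))  # y -> F(y) -> G(F(y)) should recover y
    return lambda_cyc * (F_nn.l1_loss(forward_cycle, x_source) +
                         F_nn.l1_loss(backward_cycle, y_target))
```
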
“…Several related works propose GAN-based unsupervised domain adaptation methods to address the specific domain gap between synthetic and real-world images. SimGAN (Shrivastava et al., 2016) leverages simulation for the automatic generation of large annotated datasets with the goal of refining synthetic images to make them look more realistic. Sadat Saleh et al. (2018) effectively leverage synthetic data by treating foreground and background in different manners.…”
Section: Related Work
confidence: 99%