2015
DOI: 10.48550/arxiv.1505.07818
Preprint

Domain-Adversarial Training of Neural Networks

Cited by 72 publications (84 citation statements)
References 0 publications
“…There have been recent advances in this field through the use of generative adversarial networks (GANs, see e.g. [14]), which aim to align the domains through unsupervised or semi-supervised learning while retaining good performance on the source domain classification.…”
Section: Results (mentioning)
Confidence: 99%
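For reference, the domain-adversarial objective that such GAN-style alignment builds on can be written, schematically and following the notation of the cited paper (feature extractor $G_f$, label predictor $G_y$, domain classifier $G_d$, trade-off weight $\lambda$; normalization over the source and target sums is simplified here), as a saddle-point problem:

$$
E(\theta_f, \theta_y, \theta_d) = \frac{1}{n}\sum_{i=1}^{n} L_y\!\big(G_y(G_f(x_i)),\, y_i\big) \;-\; \lambda\,\frac{1}{N}\sum_{i=1}^{N} L_d\!\big(G_d(G_f(x_i)),\, d_i\big),
$$
$$
(\hat\theta_f, \hat\theta_y) = \arg\min_{\theta_f,\theta_y} E(\theta_f, \theta_y, \hat\theta_d), \qquad
\hat\theta_d = \arg\max_{\theta_d} E(\hat\theta_f, \hat\theta_y, \theta_d).
$$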
“…Researchers have encouraged invariance or forgetting with VAEs by utilizing various combinations of weak supervision [1], [23], adversarial training [6], and by trading off reconstruction error against encoding capacity according to the information bottleneck principle [25]. Others have sought to fully disentangle the latent space into independent partitions [14], thereby enabling the isolation of wanted from unwanted information.…”
Section: Literature Review (mentioning)
Confidence: 99%
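The "reconstruction error versus encoding capacity" trade-off mentioned above is usually written as a β-weighted VAE objective; this is the standard formulation, stated here only for context rather than taken from the cited works:

$$
\mathcal{L}(x) = \mathbb{E}_{q_\phi(z \mid x)}\big[-\log p_\theta(x \mid z)\big] \;+\; \beta\, D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big),
$$

where choosing β > 1 tightens the bottleneck and pushes the encoder to discard (forget) information about $x$.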
“…In contrast, we show that the first layers are not immune to domain shifts even though they are generic feature extractors and that, by improving only the first layer features, we can already improve the performance of the whole network. Finally, [1] shows a significant improvement through the use of a deep network along with a domain regressor. This is done by adding a sub-network consisting of parallel layers to the classification layer.…”
Section: Shallow DA (mentioning)
Confidence: 99%
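The "domain regressor added in parallel to the classification layer" can be sketched in a few lines. Below is a minimal PyTorch illustration; the module sizes, layer choices, and names are placeholders rather than the architecture used in the cited works:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in the backward pass,
    so the shared features are trained to confuse the domain regressor."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    """Feature extractor with two parallel heads: a label classifier and a domain regressor."""
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, feat_dim), nn.ReLU())
        self.label_head = nn.Linear(feat_dim, num_classes)   # task classifier
        self.domain_head = nn.Linear(feat_dim, 2)            # source vs. target

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        return self.label_head(f), self.domain_head(GradReverse.apply(f, lambd))

# Usage: label loss is computed on source data only, domain loss on both domains.
model = DANN()
x = torch.randn(4, 1, 28, 28)
class_logits, domain_logits = model(x, lambd=0.5)
```

The gradient reversal layer is what lets a single backward pass train the domain head to discriminate domains while simultaneously pushing the shared features toward domain invariance.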
“…MNIST → MNIST-M: here, we train a network on the MNIST dataset of black-and-white handwritten digits, while the target test set is the MNIST-M dataset, composed of the same digits as MNIST [9] but blended with patches from real images [31]. We followed the procedure described by [1] and verified it by obtaining the same results. Synthetic traffic signs → Dark illumination: in this setting, we want to imitate the real-life situation in which a system is trained on a large dataset of synthetic images that mimics the different conditions as closely as possible, and then tested on a dataset affected by a domain shift.…”
Section: Setup and Datasets (mentioning)
Confidence: 99%
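The MNIST-M construction referenced here (digits blended with patches from real photos) is often re-implemented with an absolute-difference blend. The sketch below shows that variant with a hypothetical helper; the exact recipe should be taken from [1] rather than from this snippet:

```python
import numpy as np

def blend_digit(digit_28x28, photo_rgb, rng=None):
    """Blend a grayscale MNIST digit with a random patch from a color photo,
    as in common MNIST-M re-implementations (absolute-difference blending).

    digit_28x28: uint8 array of shape (28, 28)
    photo_rgb:   uint8 array of shape (H, W, 3), with H, W >= 28
    """
    rng = rng or np.random.default_rng()
    h, w, _ = photo_rgb.shape
    top = rng.integers(0, h - 28 + 1)
    left = rng.integers(0, w - 28 + 1)
    patch = photo_rgb[top:top + 28, left:left + 28].astype(np.int16)
    digit = digit_28x28.astype(np.int16)[..., None]   # broadcast over color channels
    return np.abs(patch - digit).astype(np.uint8)     # (28, 28, 3) blended image
```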