2022
DOI: 10.1016/j.neunet.2022.02.011
Deep adversarial transition learning using cross-grafted generative stacks

Cited by 6 publications (2 citation statements) · References 17 publications
“…The CDLM has higher adaptation accuracies for scenarios with seemingly large domain gaps, such as MNIST→MNISTM and Fashion→FashionM. For the 3D scenario, our model's performance is a little lower than PixelDA [30] and DATL [56] but outperforms all the other methods. In PixelDA, the input is not only source images but also depth image pairs.…”
Section: Quantitative Results on Unsupervised Domain Adaptation (mentioning, confidence: 83%)
“…For this scenario, only the labels of the source images were available during training. We choose DANN [37] as the baseline, and compare our model with several other state-of-the-art domain adaptation methods, including Conditional Domain Adaptation Network (CDAN) [52], Pixel-level Domain Adaptation (PixelDA) [30], Unsupervised Image-to-Image Translation (UNIT) [4], Cycle-Consistent Adversarial Domain Adaptation (CyCADA) [32], Generate to Adapt (GtA) [31], Transferable Prototypical Networks (TPN) [53], Domain Symmetric Networks (SymmNets-V2) [54], Instance-Level Affinity-based Networks (ILA-DA) [55], and Deep Adversarial Transition Learning (DATL) [56].…”
Section: Quantitative Results on Unsupervised Domain Adaptation (mentioning, confidence: 99%)
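The second citation statement describes the standard unsupervised domain adaptation protocol (labeled source images, unlabeled target images) with DANN [37] as the baseline. As a rough illustration only, and not code from the cited paper or from DATL, the sketch below shows one DANN-style training step using a gradient-reversal layer; all module names, sizes, and hyperparameters are hypothetical stand-ins.

```python
# Minimal sketch of a DANN-style unsupervised domain adaptation step:
# source labels supervise the classifier, a domain discriminator with a
# gradient-reversal layer pushes source/target features to align.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
classifier = nn.Linear(128, 10)       # label predictor, trained on source labels only
discriminator = nn.Linear(128, 2)     # domain classifier: source vs. target
opt = torch.optim.Adam(
    list(feature.parameters()) + list(classifier.parameters()) + list(discriminator.parameters()),
    lr=1e-3,
)
ce = nn.CrossEntropyLoss()

# Dummy batches standing in for, e.g., MNIST (source) and MNIST-M (target).
xs, ys = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
xt = torch.randn(32, 1, 28, 28)       # target images: labels never used

fs, ft = feature(xs), feature(xt)
cls_loss = ce(classifier(fs), ys)     # supervised loss on source only

f_all = torch.cat([fs, ft], dim=0)
d_labels = torch.cat([torch.zeros(32), torch.ones(32)]).long()
dom_loss = ce(discriminator(GradReverse.apply(f_all, 1.0)), d_labels)

(cls_loss + dom_loss).backward()
opt.step()
```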