“…For this scenario, only the labels of the source images were available during training. We choose DANN [37] as the baseline, and compare our model with several other state-of-the-art domain adaptation methods, including Conditional Domain Adaptation Network (CDAN) [52], Pixel-level Domain Adaptation (PixelDA) [30], Unsupervised Image-to-Image Translation (UNIT) [4], Cycle-Consistent Adversarial Domain Adaptation (CyCADA) [32], Generate to Adapt (GtA) [31], Transferable Prototypical Networks (TPN) [53], Domain Symmetric Networks (SymmNets-V2) [54], Instance-Level Affinity-based Networks (ILA-DA) [55], and Deep Adversarial Transition Learning (DATL) [56].…”