2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00442

Structure Preserving Generative Cross-Domain Learning

Cited by 37 publications (12 citation statements)
References: 28 publications
“…Specifically, TSA empowers BSP with 4.9% improvement, achieving the highest accuracy 71.2% on average. Based on these promising results, we can infer that TSA can stably enhance the transferability of classifiers on this difficult cross-domain dataset.…”
[Rows of the citing paper's comparison table were interleaved into the extracted quote (reported accuracies, %):
[20] 92.1 100.0 100.0 88.0 71.0 67.8 86.2
MCD [41] 88.6 98.5 100.0 92.2 69.5 69.7 86.5
BNM [7] 91.5 98.5 100.0 90.3 70.9 71.6 87.1
DMRL [48] 90.8 99.0 100.0 93.4 73.0 71.2 87.9
SymNets [54] 90.8 98.8 100.0 93.9 74.6 72.5 88.4
TAT [23] 92.5 99.3 100.0 93.2 73.1 72.1 88.4
MDD [53] 94.5 98.4 100.0 93.5 74.6 72.2 88.9
GVB-GD [8] 94.8 98.7 100.0 95.0 73.4 73.7 89.3
GSP [49] 92.9 98.7 99.8 94.5 75.9 74.9 89.5
ResNet-50 [13] (remaining values not captured in the extraction)]
Section: Results
confidence: 99%
“…To this end, [24] proposes to translate the source data to target data for domain adaptation in segmentation task. Several researches [25]-[32] focus on utilizing generative adversarial architecture to transfer domain-invariant knowledge across domains in the feature space. Besides, Tsai et al [33] develop a multi-level contextual adaptation framework in the output space.…”
Section: B. Semantic Knowledge Transfer
confidence: 99%
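The statement above refers to methods that play an adversarial game in the feature space to learn domain-invariant representations. As a rough, hedged illustration of that general idea (not the code of any cited work), the PyTorch sketch below pairs a gradient-reversal layer with a small domain discriminator; the 2048-d input size, the 31-class head, and all module and function names are assumptions chosen only for the example.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Hypothetical components; sizes assume ResNet-50-style 2048-d features and 31 classes.
feature_extractor = nn.Sequential(nn.Linear(2048, 256), nn.ReLU())
label_classifier = nn.Linear(256, 31)
domain_discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

def adversarial_step(x_src, y_src, x_tgt, lambd=1.0):
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)

    # Supervised loss on the labeled source batch.
    cls_loss = nn.functional.cross_entropy(label_classifier(f_src), y_src)

    # Domain loss: the reversed gradient trains the feature extractor to fool
    # the discriminator, pushing source and target features to align.
    feats = grad_reverse(torch.cat([f_src, f_tgt]), lambd)
    dom_labels = torch.cat([torch.zeros(len(f_src)), torch.ones(len(f_tgt))]).long()
    dom_loss = nn.functional.cross_entropy(domain_discriminator(feats), dom_labels)
    return cls_loss + dom_loss

In practice the lambd weight is usually ramped up over training so that the adversarial signal does not dominate before the label classifier has learned useful features.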
“…Since the target domain is unlabeled, these works rely on predicting pseudo-labels (Kang et al., 2019) or computing prototype representations of source and target classes (Wang & Breckon, 2020), and then the target domain samples are classified by the prototypes of the target domain classes during the training process. Structure-preserving methods (Ren et al., 2019; Xia & Ding, 2020) try to achieve class-level transfer by matching the structure graphs across domains. However, as observed by Chen et al. (2019), the discriminability may be decreased when the models only focus on enhancing transferability.…”
Section: Class-specific Learning
confidence: 99%
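The class-level ideas in this statement (pseudo-labels, class prototypes, and cross-domain structure matching) can be made concrete with a short sketch. The PyTorch version below is only an assumed illustration: the cosine-similarity class graph, the nearest-prototype pseudo-labeling rule, and every function name are choices made for exposition, not the exact formulation of Xia & Ding (2020) or the other cited works.

import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    """Mean feature per class; assumes every class appears in the batch."""
    return torch.stack([features[labels == c].mean(dim=0) for c in range(num_classes)])

def pseudo_label(target_features, source_protos):
    """Assign each unlabeled target sample to its nearest source prototype (cosine similarity)."""
    sims = F.normalize(target_features, dim=1) @ F.normalize(source_protos, dim=1).t()
    return sims.argmax(dim=1)

def structure_matching_loss(src_protos, tgt_protos):
    """Penalize the gap between the two domains' class-similarity graphs."""
    g_src = F.normalize(src_protos, dim=1) @ F.normalize(src_protos, dim=1).t()
    g_tgt = F.normalize(tgt_protos, dim=1) @ F.normalize(tgt_protos, dim=1).t()
    return F.mse_loss(g_tgt, g_src)

# Usage sketch (f_src, y_src, f_tgt are hypothetical source/target feature batches):
# src_protos = class_prototypes(f_src, y_src, num_classes)
# y_tgt_hat  = pseudo_label(f_tgt, src_protos)
# tgt_protos = class_prototypes(f_tgt, y_tgt_hat, num_classes)
# loss = structure_matching_loss(src_protos, tgt_protos)

Because the target prototypes are built from pseudo-labels, schemes of this kind are typically sensitive to label noise early in training; confidence thresholding on the pseudo-labels is a common mitigation.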