2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.00894

Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation

Citation types: 0 supporting, 46 mentioning, 0 contrasting
Cited by 37 publications (46 citation statements) · References 17 publications

Citation statements (ordered by relevance):
“…Oracle means the model is trained and tested on the specific target domain. Our method achieves Dice scores that are about 55% and 13% higher than those from [26], [24] and [27] when using OCTA (OCTA-500) and fundus images (DRIVE) as the source domain. For the dual teachers, T_sim performs well on D_sim and T_dis achieves decent performance on D_dis.…”
Section: Results
confidence: 92%
“…All methods are evaluated using two metrics, i.e., Dice [%] and the 95% Hausdorff Distance (HD [px]), the results of which are tabulated in Table 1. We compare RVms with two recently developed SOTA DA/MTDA models, namely ADVENT [26] and Multi-Dis [24]. Note that [26] is trained with mixed target domains.…”
Section: Results
confidence: 99%
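
The two metrics quoted above are standard for segmentation evaluation. A minimal sketch of how they are typically computed on binary masks (function names and the SciPy-based implementation are illustrative, not taken from the cited papers):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_score(pred, gt):
    """Dice overlap between two binary masks, reported in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 100.0  # both masks empty: perfect agreement by convention
    return 100.0 * 2.0 * np.logical_and(pred, gt).sum() / denom

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance in pixels.

    Assumes both masks are non-empty. distance_transform_edt gives, at each
    pixel, the distance to the nearest foreground pixel of the other mask
    once that mask is inverted."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    dist_to_gt = distance_transform_edt(~gt)[pred]    # pred pixels -> nearest gt pixel
    dist_to_pred = distance_transform_edt(~pred)[gt]  # gt pixels -> nearest pred pixel
    return np.percentile(np.hstack([dist_to_gt, dist_to_pred]), 95)
```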
“…Within the DASiS context, different approaches have been proposed [83,166]. Isobe et al [83] propose a method that trains an expert model for every target domain where the models are encouraged to collaborate via style transfer.…”
Section: Multi-target DASiS
confidence: 99%
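
One way per-target experts can "collaborate via style transfer", as described above, is to re-normalize one domain's features with another domain's feature statistics. A minimal AdaIN-style sketch (whether [83] uses exactly this primitive is an assumption; the names are illustrative):

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5):
    """Adaptive instance normalization on (N, C, H, W) feature maps:
    strip the content features' channel-wise statistics and re-apply
    the style features' statistics, transferring domain 'style'."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```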
“…Such expert models are further exploited as teachers for a student model that learns to imitate their output and serves as a regularizer to bring the different experts closer to each other in the learned feature space. Instead, Saporta et al [166] propose to combine, for each target domain T_i, two adversarial pipelines: one that learns to discriminate between the domain T_i and the source, and one between T_i and the union of the other target domains. Then, to reduce the instability that the multi-discriminator model training might cause, they propose a multi-target knowledge transfer by adopting a multi-teacher/single-student distillation mechanism, which leads to a model that is agnostic to the target domains.…”
Section: Multi-target DASiS
confidence: 99%
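
The quoted description maps onto two adversarial loss terms per target T_i plus a distillation stage. A minimal PyTorch-style sketch under that reading (all module and function names are hypothetical, not the authors' code):

```python
import torch
import torch.nn.functional as F

def target_adversarial_loss(d_src, d_rest, probs_ti):
    """Segmenter-side adversarial loss for one target domain T_i.

    d_src discriminates T_i vs. the source; d_rest discriminates T_i vs.
    the union of the other targets (both hypothetical discriminators that
    map soft segmentation maps to per-pixel logits). The segmenter tries
    to make T_i predictions indistinguishable from both."""
    logits_src, logits_rest = d_src(probs_ti), d_rest(probs_ti)
    loss = F.binary_cross_entropy_with_logits(logits_src, torch.ones_like(logits_src))
    loss = loss + F.binary_cross_entropy_with_logits(logits_rest, torch.ones_like(logits_rest))
    return loss

def multi_teacher_distillation_loss(student_logits, teacher_logits_list, tau=1.0):
    """Multi-teacher/single-student stage: a target-agnostic student
    matches the softened predictions of each per-target teacher."""
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    loss = 0.0
    for t_logits in teacher_logits_list:
        p_teacher = F.softmax(t_logits.detach() / tau, dim=1)
        loss = loss + F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return loss / len(teacher_logits_list)
```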
“…Recently, multi-target domain adaptation (MTDA) methods [11,18,40,43,48,58] have been proposed, which enable a single model to adapt from a labeled source domain to multiple unlabeled target domains. Most of these works train multiple STDA models and then distill their knowledge into a single multi-target domain adaptation network.…”
Section: Introduction
confidence: 99%