2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00295
Rethinking Ensemble-Distillation for Semantic Segmentation Based Unsupervised Domain Adaption

Cited by 11 publications (4 citation statements)
References 37 publications
“…Tables 1 and 2 compare the results of ELDA against multiple baselines. Please note that these baselines do not include works that resort to ensemble distillation methods [2,36,37] or transformer based architectures [13] for fair comparisons. We also include the performance of ELDA trained solely in the source domains, denoted as source only, for reference.…”
Section: Quantitative Results on the Benchmarks
Mentioning confidence: 99%
“…ELDA utilizes edges as the domain invariant information by incorporating edge extraction into its training process as an auxiliary task. The experimental results show that without resorting to ensemble distillation methods [2,36,37] or transformer based architectures [13], ELDA is able to achieve the state-of-the-art performance on two commonly adopted benchmarks [7,23,24]. The contributions of this work are summarized as follows:…”
Section: Introduction
Mentioning confidence: 99%
“…Chao et al. [18] assume the existence of a set of semantic segmentation models independently pre-trained according to some UDA technique. Then, the pseudo-label confidences coming from such models are unified, fused, and finally distilled into a student model.…”
Section: Related Work
Mentioning confidence: 99%
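The fuse-then-distill recipe described in the excerpt above (unify per-model pseudo-label confidences, fuse them, and distill into a student) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the function name, the simple averaging fusion, and the confidence threshold are all assumptions for exposition.

```python
import numpy as np

def fuse_pseudo_labels(prob_maps, conf_thresh=0.8, ignore_index=255):
    """Fuse softmax maps from several UDA-pretrained teachers.

    prob_maps    : list of arrays, each of shape (C, H, W), rows summing
                   to 1 over the class axis (per-pixel softmax outputs).
    conf_thresh  : pixels whose fused confidence falls below this value
                   are marked with ignore_index (hypothetical choice).
    Returns (labels, conf): pseudo-label map (H, W) and fused confidence.
    """
    # Fuse by averaging the per-model class probabilities per pixel.
    fused = np.mean(np.stack(prob_maps, axis=0), axis=0)   # (C, H, W)
    conf = fused.max(axis=0)                               # (H, W)
    labels = fused.argmax(axis=0)                          # (H, W)
    # Mask out low-confidence pixels so the student ignores them.
    labels[conf < conf_thresh] = ignore_index
    return labels, conf

# Toy example: two teachers, 3 classes, a 1x2 "image".
p1 = np.array([[[0.80, 0.10]], [[0.10, 0.50]], [[0.10, 0.40]]])
p2 = np.array([[[0.90, 0.20]], [[0.05, 0.40]], [[0.05, 0.40]]])
labels, conf = fuse_pseudo_labels([p1, p2])
# Pixel 0 fuses to class 0 with confidence 0.85 (kept);
# pixel 1 peaks at only 0.45 and is masked as ignore_index.
```

In the full setting the resulting `labels` map would supervise the student network with a cross-entropy loss that skips `ignore_index` pixels; only the fusion step is shown here.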
“…It is well-known that training deep models on synthetic images for performing on real-world ones requires domain adaptation [8,9], which must be unsupervised if we have no labels from real-world images [10]. Thus, this paper falls into the realm of unsupervised domain adaptation (UDA) for semantic segmentation [11,12,13,14,15,16,17,18,19,20,21,22], i.e., in contrast to assuming access to labels from the target domain [23,24]. Note that the great relevance of UDA in this context comes from the fact that, until now, pixel-level semantic image segmentation labels are obtained by cumbersome and error-prone manual work.…”
Section: Introduction
Mentioning confidence: 99%