2022 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW) 2022
DOI: 10.1109/wacvw54805.2022.00027
Unsupervised BatchNorm Adaptation (UBNA): A Domain Adaptation Method for Semantic Segmentation Without Using Source Domain Representations

Cited by 34 publications (32 citation statements) · References 50 publications
“…This is disadvantageous for continual UDA settings, where it would be desirable to continuously adapt on a single-image basis to avoid algorithmic delay during deployment in rapidly changing domains. In contrast to existing methods [19], [22], [23], [24], [15], our CBNA method is applicable to these continual UDA settings, which we will show by our successful single-image adaptation results without the usage of additional uncorrelated images. Another novelty of CBNA is its integration into the single-image inference forward pass of an already trained model, which introduces nearly no computational overhead during inference.…”
Section: UDA via Normalization Layers
confidence: 89%
“…The initial works of Li et al [22], [23] for image classification and Zhang et al [15] for semantic segmentation show that the re-estimation of batch normalization (BN) statistics in the target domain can be used for UDA without source data. The UBNA method from Klingner et al [19] has shown that mixing statistics from the source and target domain outperforms these initial works, which we build upon for our method design. These findings for domain adaptation are also supported by the work on adversarial robustness of Schneider et al [24], where the beneficial effect of mixing statistics from perturbed and clean images is shown.…”
Section: UDA via Normalization Layers
confidence: 99%
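The statement above describes the core mechanism: re-estimating batch normalization statistics on the target domain, and (per UBNA) mixing them with the source-domain statistics rather than discarding either. A minimal NumPy sketch of that idea follows; the function names and the mixing weight `alpha` are illustrative choices, not identifiers from the cited papers, and this is a simplified single-layer view of the technique, not the authors' implementation.

```python
import numpy as np

def mixed_bn_statistics(mu_src, var_src, mu_tgt, var_tgt, alpha=0.5):
    """Convex combination of source- and target-domain BN statistics.

    alpha=1.0 keeps the source statistics unchanged; alpha=0.0 fully
    re-estimates on the target domain (as in the earlier works cited);
    intermediate values mix the two domains, which UBNA reports to
    outperform either extreme. `alpha` here is an assumed hyperparameter.
    """
    mu = alpha * mu_src + (1.0 - alpha) * mu_tgt
    var = alpha * var_src + (1.0 - alpha) * var_tgt
    return mu, var

def batch_norm_infer(x, mu, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard BN inference pass using fixed (here: mixed) statistics."""
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

# Example: mix source stats (mean 0, var 1) with target stats (mean 2, var 3).
mu, var = mixed_bn_statistics(0.0, 1.0, 2.0, 3.0, alpha=0.5)
y = batch_norm_infer(np.array([1.0, 3.0]), mu, var)
```

Because only the stored running statistics change, no gradient computation is needed, which is consistent with the near-zero inference overhead the first excerpt claims for single-image adaptation.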
“…Subsequent works adapted this concept and introduced improvements such as dilated convolutions [7,62], atrous spatial pyramid pooling [9], state-of-the-art feature extractors [8,45], label relaxation [68], or high resolution networks [57]. Other approaches explore unsupervised domain adaptation techniques [32,27,58,61] to mitigate the domain shift problem for semantic segmentation and thereby facilitate the use of readily available synthetic datasets [50,53]. Moreover, computational complexity can be reduced tremendously with only marginal loss in performance due to the use of efficient architectures.…”
Section: Related Work
confidence: 99%