2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00113
SC-UDA: Style and Content Gaps aware Unsupervised Domain Adaptation for Object Detection

Cited by 26 publications (15 citation statements)
References 21 publications
“…These strategies can be applied at different feature-extraction stages of the object detection model. Data-manipulation-based methods directly augment (Prakash et al., 2019; Wang et al., 2021a) or apply style transformations to (Yun et al., 2021; Yu et al., 2022) the input data to narrow the distribution gap between the source and target domains. Learning-strategy-based methods achieve object detection on the target domain by introducing learning strategies such as self-training (Zhao et al., 2020a; Li et al., 2021) and teacher-student networks (He et al., 2022; Li et al., 2022b).…”
Section: Domain Adaptive Object Detection
confidence: 99%
“…In view of the above problems, recent research has focused on Unsupervised Domain Adaptation (UDA) methods [16][17][18][19][20][21][22], which leverage unsupervised transfer learning to alleviate domain gaps. UDA methods transfer knowledge from the label-rich source domain to the target domain without tedious manual annotation.…”
Section: Introduction
confidence: 99%
“…To some extent, the quality of the pseudo labels is tightly related to detection precision. To suppress noise in the pseudo labels, many self-training optimizations [13,19,20,22,[30][31][32] have been proposed, including a knowledge-distillation strategy [19], progressive confidence restriction [13], an imbalanced mini-batch sampling strategy [20], and graph representations [22,32]. Although self-training is an efficient way to boost performance, one shortcoming of these methods is that classification confidence is mostly used as the criterion for selecting prediction boxes.…”
Section: Introduction
confidence: 99%
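The confidence-based pseudo-label selection that the excerpt above criticizes can be sketched in a few lines. This is an illustrative minimal sketch, not code from SC-UDA or any cited paper; the `Detection` tuple layout, the function name, and the 0.8 threshold are all assumptions.

```python
from typing import List, Tuple

# (x1, y1, x2, y2), class id, classification confidence -- an assumed layout
Detection = Tuple[Tuple[float, float, float, float], int, float]

def select_pseudo_labels(detections: List[Detection],
                         conf_threshold: float = 0.8) -> List[Detection]:
    """Keep only detections whose classification confidence exceeds the
    threshold; these serve as pseudo ground truth when retraining the
    detector on unlabeled target-domain images."""
    return [d for d in detections if d[2] >= conf_threshold]
```

The weakness the excerpt points out is visible here: the filter looks only at classification confidence, so a confidently misclassified or poorly localized box still becomes a pseudo label.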
“…Most DA detection models aim to measure the feature-distribution distance between domains and then minimize the discrepancy; an adversarial scheme is therefore exploited between the feature extractor and a domain discriminator (Chen et al., 2018; Saito et al., 2019; Li et al., 2020; Chen et al., 2021). Adversarial feature learning aims to decrease style gaps (e.g., color, texture, illumination) between domains to improve generalization ability. For the challenges of content gaps, however, such as varying object locations, densities, and distributions, adversarial feature learning may lead to feature misalignment, which decreases the discriminability of the detector (Jiang et al., 2022; Yu et al., 2022).…”
Section: Introduction
confidence: 99%
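The adversarial scheme between feature extractor and domain discriminator described above is commonly realized with a gradient reversal layer: identity in the forward pass, negated and scaled gradients in the backward pass, so that while the discriminator learns to separate source from target features, the extractor receives opposite gradients and learns domain-invariant ones. A minimal NumPy sketch under those assumptions (class and attribute names are illustrative, not from the cited works):

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer: y = x forward, dL/dx = -lam * dL/dy backward."""

    def __init__(self, lam: float = 1.0):
        self.lam = lam  # trade-off weight on the adversarial (domain) loss

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Identity: the domain discriminator sees the features unchanged.
        return x

    def backward(self, grad_output: np.ndarray) -> np.ndarray:
        # Flip the sign: gradients that would help the discriminator
        # instead push the feature extractor toward domain-invariant features.
        return -self.lam * grad_output
```

In a full detector this layer would sit between the backbone features and the domain classifier; frameworks like PyTorch express the same idea with a custom autograd function.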