2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00043
Cleaning Noisy Labels by Negative Ensemble Learning for Source-Free Unsupervised Domain Adaptation

Cited by 30 publications (12 citation statements) · References 35 publications
“…Source-free Domain Adaptation (SFDA) aims to adapt a model to the unlabelled target domain when only the source model is available and the source data set is absent during target adaptation. Existing SFDA methods use pseudolabel refinement [35,1,5], latent source feature generation using variational inference [74], or disparity among an ensemble of classifiers [31]. Certain SFDA methods resort to ad hoc source training protocols to enable the source model to be adapted on the target data.…”
Section: Related Work
Mentioning confidence: 99%
“…On the other hand, our proposed U-SFAN does not require specialized source training except for a computationally lightweight approximate inference, which can be done in a single pass over the source data during source training. Moreover, unlike [31,1], our U-SFAN works well on both closed-set and open-set SFDA without ad hoc modifications.…”
Section: Related Work
Mentioning confidence: 99%
“…where c_k denotes the centroid of the k-th class and δ_k denotes the k-th element of the softmax output. Some recent works [51], [94] argue that a single prototype does not fully characterize a class, so multiple prototypes are generated for each class. In general, the aim of this step is to obtain a class prototype p_k that represents the distribution of each category in the target domain.…”
Section: Pseudo Labeling
Mentioning confidence: 99%
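The centroid-based pseudo-labeling the excerpt describes (softmax-weighted class centroids c_k, then nearest-centroid assignment) can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, not the cited papers' exact implementation; the function name and the cosine-distance choice are assumptions for the sketch.

```python
import numpy as np

def prototype_pseudo_labels(features, logits):
    """Sketch: assign pseudo-labels via softmax-weighted class centroids.

    features: (N, D) target-domain feature vectors
    logits:   (N, K) classifier outputs for the same samples
    Returns an (N,) array of pseudo-labels from the nearest centroid.
    """
    # delta_k: softmax probability of class k for each sample
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)              # (N, K)

    # c_k: softmax-weighted mean of the features (one centroid per class)
    centroids = probs.T @ features                         # (K, D)
    centroids /= probs.sum(axis=0)[:, None]

    # Assign each sample to its nearest centroid under cosine similarity
    f_norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    c_norm = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return (f_norm @ c_norm.T).argmax(axis=1)
```

Extending this to the multiple-prototypes-per-class variant mentioned in the excerpt would replace the single weighted mean with, e.g., per-class clustering of the features.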
“…5. Therefore, false label filtering [56], [87], [100] and noisy label learning [94], [101], [102] are two important directions for improving the accuracy of pseudo labels. False label filtering mainly rejects unreliable pseudo-labels through some reasonable mechanism.…”
Section: Pseudo Labeling
Mentioning confidence: 99%
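One common instance of the false-label filtering the excerpt mentions is confidence thresholding: keep a pseudo-label only if the model's top-class probability exceeds a threshold. The sketch below is a generic illustration of that idea, not the specific mechanism of any cited work; the function name and the 0.9 default are assumptions.

```python
import numpy as np

def filter_pseudo_labels(probs, threshold=0.9):
    """Sketch: reject unreliable pseudo-labels by confidence thresholding.

    probs: (N, K) predicted class probabilities on target-domain samples.
    Returns (pseudo_labels, keep_mask); samples whose top probability
    falls below the threshold are excluded from the next training round.
    """
    confidence = probs.max(axis=1)        # top-class probability per sample
    keep = confidence >= threshold        # boolean mask of reliable samples
    return probs.argmax(axis=1), keep
```

Noisy label learning, the second direction named above, instead keeps all pseudo-labels but trains with loss functions or ensembling schemes that tolerate label noise.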