2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00685
Re-distributing Biased Pseudo Labels for Semi-supervised Semantic Segmentation: A Baseline Investigation

Cited by 109 publications (53 citation statements)
References 28 publications
“…Consequently, the performance of the minority classes significantly degrades due to insufficient training samples. One can solve this during the training process through class re-balancing [71] or augmentation [16]. In contrast, we show that active learning can implicitly solve this by iteratively selecting a batch of samples to annotate at the data collection stage.…”
Section: Region Impurity and Prediction Uncertainty
confidence: 91%
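The "class re-balancing" mentioned in the excerpt above is not spelled out there; the following is a minimal sketch, assuming inverse-frequency weighting of the segmentation cross-entropy loss. The weighting rule, function name, and parameters are illustrative assumptions, not the procedure of [71].

```python
import torch
import torch.nn.functional as F

def class_balanced_ce(logits, labels, num_classes, ignore_index=255, eps=1.0):
    """Cross-entropy with inverse-frequency class weights.

    logits: (B, C, H, W) raw network outputs
    labels: (B, H, W) integer ground-truth maps
    """
    valid = labels != ignore_index
    # Per-class pixel counts over the current batch (illustrative; a running
    # estimate over the whole labelled set is also common).
    counts = torch.bincount(labels[valid].flatten(), minlength=num_classes).float()
    weights = counts.sum() / (counts + eps)   # rarer classes get larger weights
    weights = weights / weights.mean()        # keep the overall loss scale roughly unchanged
    return F.cross_entropy(logits, labels,
                           weight=weights.to(logits.device),
                           ignore_index=ignore_index)
```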
“…For example, CReST [64] selects pseudo labels more frequently for minority classes according to the estimated class distribution. DARS [20] employs an adaptive threshold to select more pseudo labels for minority classes during self-training. However, these methods tend to generate noisy pseudo labels from class-biased segmentation of unlabeled data.…”
Section: Class-Imbalance Learning
confidence: 99%
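A minimal sketch of the kind of adaptive, class-wise thresholding described in the excerpt above, assuming per-class thresholds are chosen so that the retained pseudo labels roughly follow a target class distribution. The quantile rule, function names, and parameters are illustrative assumptions, not the exact DARS procedure.

```python
import torch

def adaptive_class_thresholds(probs, target_ratio, base_keep=0.5):
    """Pick a per-class confidence threshold so that minority classes keep
    proportionally more pseudo-labelled pixels.

    probs: (N, C, H, W) softmax outputs on unlabelled images
    target_ratio: (C,) desired class distribution (e.g. estimated from labelled data)
    base_keep: overall fraction of pixels to keep
    """
    conf, pred = probs.max(dim=1)                       # (N, H, W)
    num_classes = probs.shape[1]
    thresholds = torch.ones(num_classes, device=probs.device)
    total_keep = base_keep * conf.numel()
    for c in range(num_classes):
        c_conf = conf[pred == c]
        if c_conf.numel() == 0:
            continue
        # Keep roughly target_ratio[c] * total_keep pixels of class c.
        k = int(min(c_conf.numel(), max(1.0, float(target_ratio[c]) * total_keep)))
        thresholds[c] = torch.topk(c_conf, k).values.min()
    return thresholds

def select_pseudo_labels(probs, thresholds, ignore_index=255):
    conf, pred = probs.max(dim=1)
    keep = conf >= thresholds[pred]                     # per-pixel, class-dependent threshold
    return torch.where(keep, pred, torch.full_like(pred, ignore_index))
```

In this sketch, pixels whose confidence falls below their class's threshold are marked with `ignore_index` and skipped by the loss during the next self-training round.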
“…Given labelled images $X_l \subset \mathbb{R}^{H\times W\times 3}$ with pixel-level semantic labels $\hat{y} \subset (1, C)^{H\times W}$ and unlabelled images $X_u \subset \mathbb{R}^{H\times W\times 3}$ ($H$, $W$, and $C$ denote image height, image width, and class number, respectively), the goal is to learn a segmentation model $F$ that fits both labelled and unlabelled data and works well on unseen images. Existing methods [10,20,29,31,35,43,45,47,73] combine supervised learning on labelled images with unsupervised learning on unlabelled images to tackle the semi-supervised challenge. For labelled images, they adopt the cross-entropy loss as the supervised loss $L_s$ to train $F$.…”
Section: Problem Definition
confidence: 99%
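A minimal sketch of the generic training objective described in this excerpt: supervised cross-entropy $L_s$ on labelled images combined with a pseudo-label cross-entropy on unlabelled images. The function name, the `lambda_u` weight, and the assumption that low-confidence pixels are marked with `ignore_index` are illustrative, not tied to any specific method cited here.

```python
import torch.nn.functional as F

def semi_supervised_step(model, x_l, y_l, x_u, pseudo_y_u,
                         lambda_u=1.0, ignore_index=255):
    """One generic semi-supervised segmentation step.

    x_l: (B, 3, H, W) labelled images,   y_l: (B, H, W) ground-truth maps
    x_u: (B, 3, H, W) unlabelled images, pseudo_y_u: (B, H, W) pseudo labels
    (pixels rejected by the confidence threshold set to ignore_index)
    """
    # Supervised loss L_s on labelled images.
    logits_l = model(x_l)                               # (B, C, H, W)
    loss_s = F.cross_entropy(logits_l, y_l, ignore_index=ignore_index)

    # Unsupervised loss on unlabelled images, driven by pseudo labels.
    logits_u = model(x_u)
    loss_u = F.cross_entropy(logits_u, pseudo_y_u, ignore_index=ignore_index)

    return loss_s + lambda_u * loss_u
```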