2022
DOI: 10.48550/arxiv.2203.03884
Preprint

Semi-Supervised Semantic Segmentation Using Unreliable Pseudo-Labels

Abstract: The crux of semi-supervised semantic segmentation is assigning adequate pseudo-labels to the pixels of unlabeled images. A common practice is to select only highly confident predictions as pseudo ground-truth, but this leaves most pixels unused because they are deemed unreliable. We argue that every pixel matters to model training, even if its prediction is ambiguous. Intuitively, an unreliable prediction may be confused among the top classes (i.e., those with the highest probabilities)…
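
The abstract hinges on splitting pixels into reliable and unreliable predictions. Below is a minimal sketch of such a split, assuming a per-pixel softmax output `probs` of shape (B, C, H, W); the entropy threshold `tau`, the number of negative classes `num_neg`, the ignore index 255, and the function name are illustrative assumptions, not the paper's actual code.

```python
import torch

def split_pseudo_labels(probs: torch.Tensor, tau: float = 0.7, num_neg: int = 3):
    """Split pixels into reliable pseudo-labels and safe negative classes."""
    C = probs.size(1)
    # Per-pixel prediction entropy, normalized to [0, 1] by log(C).
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(C)))
    reliable = entropy < tau                      # (B, H, W) confident pixels
    pseudo = probs.argmax(dim=1)                  # hard pseudo-label per pixel
    ignore = torch.full_like(pseudo, 255)         # 255 = ignore index
    pseudo = torch.where(reliable, pseudo, ignore)
    # An unreliable pixel is confused among its top classes, but it is very
    # unlikely to belong to the classes it ranks last: use those as negatives.
    neg_classes = probs.argsort(dim=1)[:, :num_neg]   # (B, num_neg, H, W)
    return pseudo, ~reliable, neg_classes
```

This reflects the abstract's intuition: an ambiguous prediction still carries usable information, namely which classes the pixel almost certainly does not belong to.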

Cited by 8 publications (14 citation statements) | References 34 publications
“…CPS [6] designs a mutual learning mechanism that trains two student models on each other's pseudo-labels. U2PL [47] exploits unreliable pixels through negative learning in a contrastive manner, making full use of the unlabeled pixels. Yi et al. [53] propose label propagation to refine the pseudo-labels.…”
Section: B. Semi-Supervised Semantic Segmentation
confidence: 99%
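
The statement above attributes a negative-learning scheme to U2PL [47]. The sketch below illustrates one common form of that idea, reusing the shapes from the previous sketch; the loss form and the function name are assumptions for illustration, not U2PL's exact contrastive loss.

```python
import torch

def negative_learning_loss(probs: torch.Tensor, neg_classes: torch.Tensor,
                           unreliable: torch.Tensor) -> torch.Tensor:
    """probs: (B, C, H, W) softmax; neg_classes: (B, K, H, W) class indices;
    unreliable: (B, H, W) boolean mask of low-confidence pixels."""
    p_neg = probs.gather(1, neg_classes)            # prob. assigned to negatives
    loss = -(1.0 - p_neg).clamp_min(1e-8).log()     # penalize confidence in them
    mask = unreliable.unsqueeze(1).float()          # (B, 1, H, W), broadcasts
    denom = (mask.sum() * neg_classes.size(1)).clamp_min(1.0)
    return (loss * mask).sum() / denom              # mean over unreliable pixels
```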
“…It is worth noting that our adaptive weight differs from weights computed from top-1 confidence to filter out low-confidence pixels [9], [21], [36]. Those weights are small for pixels with low top-1 probability, so such pixels are insufficiently used in training [47]. Our weight is small only when a pixel's prediction is confused among the top-(K+1) categories, so the model can still use the information that the prediction should not belong to the other C−K−1 categories.…”
Section: Kus
confidence: 99%
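
The adaptive weight above is only characterized qualitatively. As one plausible reading, not the cited paper's formula, a weight can shrink exactly when the top-1 and top-(K+1) probabilities are close to each other:

```python
import torch

def adaptive_weight(probs: torch.Tensor, k: int = 3) -> torch.Tensor:
    """probs: (B, C, H, W) softmax output; returns a (B, H, W) weight map."""
    topk = probs.topk(k + 1, dim=1).values       # top-(K+1) probabilities
    # Weight approaches 0 when the (K+1)-th probability is close to the
    # top-1 probability, i.e. the pixel is confused among its top-(K+1)
    # classes; otherwise the weight stays near 1.
    return 1.0 - topk[:, k] / topk[:, 0].clamp_min(1e-8)
```

Unlike a plain top-1 confidence weight, this value stays large for a pixel whose probability mass is concentrated in a few top classes, even when no single class dominates.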