2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00421

Semi-Supervised Semantic Segmentation Using Unreliable Pseudo-Labels

Abstract: The crux of label-efficient semantic segmentation is to produce high-quality pseudo-labels that let the model leverage a large amount of unlabeled or weakly labeled data. A common practice is to select only the highly confident predictions as pixel-wise pseudo-ground-truths, but this leaves most pixels unused because their predictions are unreliable. We argue, however, that every pixel matters to model training, even the unreliable and ambiguous ones. Intuitively, an unreliable prediction may get confused…
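The abstract's central idea, keeping confident predictions as pseudo-labels while still extracting signal from unreliable pixels, can be sketched roughly as follows. This is only an illustrative sketch, not the authors' released code: the entropy-based reliability split, the `keep_ratio` quantile threshold, and the function name `partition_pseudo_labels` are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def partition_pseudo_labels(logits, keep_ratio=0.8):
    """logits: (B, C, H, W) teacher predictions on unlabeled images."""
    prob = F.softmax(logits, dim=1)                       # per-pixel class probabilities
    entropy = -(prob * torch.log(prob + 1e-10)).sum(1)    # (B, H, W) prediction uncertainty
    # pixels whose entropy falls below the quantile threshold are treated as reliable
    thresh = torch.quantile(entropy.flatten(), keep_ratio)
    pseudo_label = prob.argmax(dim=1)                     # hard label per pixel
    reliable = entropy <= thresh
    pseudo_label[~reliable] = 255                         # ignore index: excluded from the CE loss
    # unreliable pixels are not wasted: their lowest-probability classes can still
    # serve as negatives in an auxiliary contrastive term ("every pixel matters")
    return pseudo_label, reliable
```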

Cited by 264 publications (112 citation statements)
References 116 publications
“…This idea is extended later to semi-supervised semantic segmentation, which trains the student model with high-confident hard pseudo-labels predicted by the teacher. On this basis, extensive attempts improve semi-supervised semantic segmentation by CutMix augmentation [18], class-balanced training [80,30,23] and contrastive learning [80,1,40,64]. A closely relevant topic to self-training in SSL is consistency regularization, which believes that enforcing semantic or distribution consistency between various perturbations, such as image augmentation [32] and network perturbation [72], can improve the robustness and generalization of the model.…”
Section: Related Work (mentioning, confidence: 99%)
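The self-training recipe described in this excerpt can be captured in a few lines. This is a hedged sketch under assumed names and shapes (the 0.95 confidence threshold, the `weak_img`/`strong_img` split, and the function `unsupervised_loss` are illustrative, not taken from any particular cited implementation): the teacher labels a weakly perturbed view, and only pixels above the threshold supervise the student on a strongly perturbed view, which also realizes the consistency-regularization idea mentioned above.

```python
import torch
import torch.nn.functional as F

def unsupervised_loss(student, teacher, weak_img, strong_img, conf_thresh=0.95):
    # teacher predicts soft labels on the weakly augmented view
    with torch.no_grad():
        t_prob = F.softmax(teacher(weak_img), dim=1)   # (B, C, H, W)
        conf, hard_label = t_prob.max(dim=1)           # per-pixel confidence and hard label
        hard_label[conf < conf_thresh] = 255           # mask out low-confidence pixels
    # student is trained on the strongly augmented view against the surviving labels
    s_logits = student(strong_img)
    return F.cross_entropy(s_logits, hard_label, ignore_index=255)
```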
“…Self-training provides a unified solution and achieves state-of-the-art performance on both settings [29,64]. One of the most common and widely used forms of self-training in semantic segmentation is a variant of mean teacher, which is shown in Fig.…”
Section: Introduction (mentioning, confidence: 99%)
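The mean-teacher variant mentioned in this excerpt keeps the teacher as an exponential moving average (EMA) of the student's weights instead of training it directly. A minimal sketch, assuming PyTorch modules and an illustrative momentum of 0.99:

```python
import torch

@torch.no_grad()
def update_teacher(teacher, student, momentum=0.99):
    # teacher weights drift slowly toward the student weights after each step
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```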
“…Compared to collecting ordinary labels in PL, it would be less laborious for collecting complementary labels in NL [10]. Therefore, NL can not only be easily combined with ordinary classification [5,10], but also assist various vision applications, e.g., [12] dealing with noisy labels by applying NL, [35] using unreliable pixels for semantic segmentation with NL, etc. In this paper, we attempt to leverage NL to augment the few-shot labeled set by predicting negative pseudo-labels from unlabeled data, and thus obtain more accurate pseudo labels to assist classifier modeling under label-constrained scenarios.…”
Section: Related Work (mentioning, confidence: 99%)
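Negative learning (NL), as used in this excerpt, supervises with a complementary label that says which class a sample is not, and the loss pushes that class's probability toward zero. A minimal sketch, assuming batched logits; the function name and the mean reduction are illustrative choices:

```python
import torch
import torch.nn.functional as F

def negative_learning_loss(logits, complementary_label):
    """logits: (N, C); complementary_label: (N,) class each sample does NOT belong to."""
    prob = F.softmax(logits, dim=1)
    # probability assigned to the forbidden class, gathered per sample
    p_neg = prob.gather(1, complementary_label.unsqueeze(1)).squeeze(1)
    # minimizing -log(1 - p_k) drives p_k toward zero
    return -torch.log(1.0 - p_neg + 1e-10).mean()
```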
“…Moreover, their method is divided into two stages, first using contrastive learning to pre-train the backbone network, and second stage adding segmentation head to the backbone network to calculate pixel-level crossentropy loss. Inspired by Wang et al [38] and Liu et al [31], our method employs a fine-grained pixel-level instance discrimination task. We select pixel-level sample features for each class of seismic facies, the features of each class of seismic facies and the central feature of the current class form a positive sample pair, and the features of other classes can be regarded as negative samples of the current class.…”
(mentioning, confidence: 99%)
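The pixel-level instance discrimination described in this last excerpt can be read as an InfoNCE-style objective in which each pixel embedding treats its own class center as the positive and the other class centers as negatives. A hedged sketch, assuming L2-normalized features and an illustrative temperature of 0.1:

```python
import torch
import torch.nn.functional as F

def class_center_contrastive(features, labels, centers, temperature=0.1):
    """features: (N, D) pixel embeddings; labels: (N,); centers: (C, D) class prototypes."""
    features = F.normalize(features, dim=1)
    centers = F.normalize(centers, dim=1)
    logits = features @ centers.t() / temperature   # (N, C) similarity to every class center
    # cross-entropy makes the own-class center the positive and all other centers
    # negatives, which is exactly an InfoNCE-style instance discrimination loss
    return F.cross_entropy(logits, labels)
```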