2022
DOI: 10.1109/tnnls.2021.3066850
Semisupervised Semantic Segmentation by Improving Prediction Confidence

Cited by 31 publications (14 citation statements) · References 57 publications

Citation statements (ordered by relevance):
“…Semi-supervised learning (SSL) involves two typical paradigms: consistency regularization [3,48,73] and entropy minimization [5,6,27,49]. Consistency regularization forces the model to produce stable and consistent predictions on the same unlabeled data under various perturbations [71].…”
Section: Related Work (mentioning)
confidence: 99%
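The consistency-regularization paradigm described in the statement above is straightforward to sketch in code: the model should produce similar per-pixel predictions for two perturbed views of the same unlabeled image. The following PyTorch snippet is a minimal illustration, not the method of any paper cited here; the Gaussian-noise perturbation, the KL objective, and the `model` interface are all assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled, noise_std=0.1):
    """Penalize disagreement between predictions on two perturbed
    views of the same unlabeled batch (illustrative sketch)."""
    # Two stochastic perturbations of the same input; Gaussian noise
    # is an assumption -- flips/crops/color jitter also qualify.
    view_a = unlabeled + noise_std * torch.randn_like(unlabeled)
    view_b = unlabeled + noise_std * torch.randn_like(unlabeled)

    logits_a = model(view_a)        # (B, C, H, W) segmentation logits
    with torch.no_grad():           # treat the second view as the target
        logits_b = model(view_b)

    # KL divergence between the two per-pixel class distributions.
    log_p_a = F.log_softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    return F.kl_div(log_p_a, p_b, reduction="batchmean")
```

In practice this term is added, with a weight, to the ordinary supervised cross-entropy on the labeled batch.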
“…$D_2 = \frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2 Y}{(\partial A_{i,j})^2}$, and $D_3 = \frac{1}{n}\sum_{t=1}^{n}\frac{\partial^3 Y}{(\partial A_{i,j})^3}$ when the input is added with random noise for $n$ times ($n$ is a constant integer). This smoothing strategy is intuitive but still rough for some complex CNN structures.…”
Section: Gradient-based Class Activation Mapping (mentioning)
confidence: 99%
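The statement above describes averaging gradient terms over several noise-perturbed copies of the input, a SmoothGrad-style strategy. Below is a minimal sketch of that idea, shown for first-order gradients only; the function name, the default sample count `n`, the noise level, and the single-image batch are illustrative assumptions (the quoted text averages higher-order derivatives of the class score).

```python
import torch

def smoothed_gradients(model, x, target_class, n=25, noise_std=0.15):
    """Average input gradients of the class score over n noisy copies
    of x (SmoothGrad-style smoothing; sketch under assumptions)."""
    x = x.detach()                  # ensure noisy copies are fresh leaves
    grads = torch.zeros_like(x)
    for _ in range(n):              # n noisy forward/backward passes
        noisy = (x + noise_std * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, target_class]   # class score, batch size 1
        score.backward()
        grads += noisy.grad
    return grads / n                # smoothed gradient estimate
```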
“…Convolutional neural networks (CNNs) have provided a basis for numerous remarkable achievements in various computer vision tasks such as image classification [9,8,27,13], object detection [19,43,1,12], and semantic segmentation [14,11,3]. Despite CNNs' extraordinary performance, they still lack a clear interpretation of their inner mechanism [10,42,21].…”
Section: Introduction (mentioning)
confidence: 99%
“…Existing supervised approaches rely on large-scale annotated data, which can be too costly to acquire in practice. To alleviate this problem, many attempts [1,4,9,15,21,33,43,48] have been made towards semi-supervised semantic segmentation, which learns a model with only a few labeled samples and numerous unlabeled ones. Under such a setting, how to adequately leverage the unlabeled data becomes critical.…”
Section: IoU Reliable Unreliable (mentioning)
confidence: 99%
“…Semi-Supervised Learning has two typical paradigms: consistency regularization [3,15,33,36,42] and entropy minimization [4,16]. Recently, a more intuitive yet effective framework, self-training [27], has become the mainstream.…”
Section: Related Work (mentioning)
confidence: 99%
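Self-training, which the last statement identifies as the current mainstream, reduces to pseudo-labeling in its simplest form: a teacher's confident per-pixel predictions on unlabeled images become cross-entropy targets for a student. The confidence threshold, the ignore-index convention, and the teacher/student interface below are illustrative assumptions, not the framework of [27].

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255  # conventional "unlabeled pixel" id in segmentation

def pseudo_label_loss(teacher, student, unlabeled, threshold=0.95):
    """Cross-entropy of the student against the teacher's confident
    per-pixel predictions (illustrative self-training sketch)."""
    with torch.no_grad():  # teacher assumed frozen and in eval mode
        probs = F.softmax(teacher(unlabeled), dim=1)   # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)                # per-pixel conf/label
        # Mask out low-confidence pixels so they carry no gradient.
        pseudo[conf < threshold] = IGNORE_INDEX

    logits = student(unlabeled)
    return F.cross_entropy(logits, pseudo, ignore_index=IGNORE_INDEX)
```

Masking low-confidence pixels keeps noisy pseudo-labels from dominating training; entropy-minimization methods differ at exactly this point, sharpening the full predicted distribution instead of committing to hard labels.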