2022
DOI: 10.1109/tip.2022.3144036
A Three-Stage Self-Training Framework for Semi-Supervised Semantic Segmentation

Cited by 37 publications (8 citation statements)
References 44 publications
“…The TPLD method [35] uses a two-stage self-training framework, first exploring the easier parts of the target domain images, and then processing the more difficult parts. TSST [36] proposes a three-stage self-training framework, which uses multi-stage multi-task learning strategies to constrain and generate better pseudo-labels. RoadDA [37] uses a similar approach to TPLD to study road segmentation in remote sensing images.…”
Section: Stage-wise Self-training
confidence: 99%
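The stage-wise pattern summarized in this excerpt lends itself to a short sketch. The following is a minimal, hypothetical PyTorch loop, not the code of TSST, TPLD, or RoadDA: the segmentation `model`, the data loaders, and the decreasing confidence thresholds are all assumed placeholders. Each stage re-labels the unlabeled images with the current model, ignores low-confidence pixels, and retrains on real plus pseudo labels; relaxing the threshold across stages mirrors the easy-then-hard progression described above.

```python
# Minimal sketch of stage-wise self-training for segmentation (hypothetical;
# not the authors' code). The model, loaders, and thresholds are placeholders.
import torch
import torch.nn.functional as F

def pseudo_label(model, image, threshold):
    """Predict per-pixel labels; mark low-confidence pixels as ignore (-1)."""
    with torch.no_grad():
        prob = torch.softmax(model(image), dim=1)   # (B, C, H, W)
        conf, label = prob.max(dim=1)               # both (B, H, W)
        label[conf < threshold] = -1                # skip uncertain pixels
    return label

def run_stage(model, optimizer, labeled_loader, unlabeled_loader, threshold):
    for (x_l, y_l), (x_u, _) in zip(labeled_loader, unlabeled_loader):
        y_u = pseudo_label(model, x_u, threshold)
        loss = (F.cross_entropy(model(x_l), y_l, ignore_index=-1)
                + F.cross_entropy(model(x_u), y_u, ignore_index=-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Three stages with a progressively relaxed threshold, so later stages also
# cover the "harder" pixels skipped earlier (cf. TPLD's easy-then-hard split):
# for threshold in (0.9, 0.8, 0.7):
#     run_stage(model, optimizer, labeled_loader, unlabeled_loader, threshold)
```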
“…Current deep SSL classification methods utilize entropy minimization and consistency regularization to learn from the unlabeled data. Entropy minimization based methods enforce the predictions of unlabeled data to be sharp, which is commonly realized by training the model via pseudo labels [2], [3], [20], [24], [39], [49]. Consistency regularization based methods enforce the model to predict consistent results for the same image under different perturbations, so that the classification boundaries pass through sparse regions of the data.…”
Section: A Semi-Supervised Learning
confidence: 99%
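The two loss families named in this excerpt can be contrasted in a few lines. The sketch below is illustrative only, not any specific cited method: `weak_aug`, `strong_aug`, and the 0.95 confidence threshold are assumptions (the pseudo-label branch is FixMatch-flavored, the consistency branch Pi-model-flavored).

```python
# Illustrative contrast of the two SSL loss families from the excerpt.
# weak_aug/strong_aug and the 0.95 threshold are assumptions, not a cited API.
import torch
import torch.nn.functional as F

def ssl_losses(model, x_unlabeled, weak_aug, strong_aug, threshold=0.95):
    # Entropy minimization via pseudo-labels: commit to sharp, one-hot targets
    # on unlabeled samples the model already classifies confidently.
    with torch.no_grad():
        prob = torch.softmax(model(weak_aug(x_unlabeled)), dim=1)
        conf, pseudo = prob.max(dim=1)
        mask = conf >= threshold
    logits = model(strong_aug(x_unlabeled))
    pseudo_loss = (F.cross_entropy(logits[mask], pseudo[mask])
                   if mask.any() else logits.sum() * 0.0)

    # Consistency regularization: predictions for two perturbed views of the
    # same input should agree, pushing decision boundaries into sparse regions.
    p1 = torch.softmax(model(weak_aug(x_unlabeled)), dim=1)
    p2 = torch.softmax(model(strong_aug(x_unlabeled)), dim=1)
    consistency_loss = F.mse_loss(p1, p2)
    return pseudo_loss, consistency_loss
```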
“…This teacher model assigns pseudo-labels to selected unlabeled nodes and incorporates them to augment the labeled data, after which a student model is trained on the augmented training nodes. Typically, current self-training frameworks introduce multiple stages where the teacher model is iteratively updated with augmented data to generate more confident pseudo-labeled nodes [7,11,29,37]. For example, Self-Training (ST) [16] represents a single-stage self-training framework that initially trains a GCN using given labels, subsequently selects predictions exhibiting the highest confidence, and then proceeds to further train the GCN utilizing the expanded set of labels.…”
Section: Introduction
confidence: 99%
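A single-stage loop in the style of ST, as described in this excerpt, can be sketched as follows. This is a hedged illustration, not the cited implementation: the dense two-layer GCN, the top-k confidence selection, and reusing one network as both teacher and student are all simplifying assumptions.

```python
# Sketch of single-stage graph self-training in the ST style described above
# (hypothetical; the dense two-layer GCN, top-k selection, and reuse of one
# network as teacher and student are simplifying assumptions).
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    def __init__(self, n_feat, n_hid, n_cls):
        super().__init__()
        self.w1 = torch.nn.Linear(n_feat, n_hid)
        self.w2 = torch.nn.Linear(n_hid, n_cls)

    def forward(self, x, a_norm):
        # a_norm: symmetrically normalized adjacency D^-1/2 (A + I) D^-1/2
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)

def self_train(model, opt, x, a_norm, y, labeled_idx, unlabeled_idx,
               k=50, epochs=200):
    def fit(idx, target):
        for _ in range(epochs):
            opt.zero_grad()
            F.cross_entropy(model(x, a_norm)[idx], target).backward()
            opt.step()

    # 1) Teacher phase: train on the given labels only.
    fit(labeled_idx, y[labeled_idx])
    # 2) Select the k most confident unlabeled nodes and pseudo-label them.
    with torch.no_grad():
        prob = torch.softmax(model(x, a_norm)[unlabeled_idx], dim=1)
        conf, pseudo = prob.max(dim=1)
        top = conf.topk(min(k, len(unlabeled_idx))).indices
    aug_idx = torch.cat([labeled_idx, unlabeled_idx[top]])
    aug_y = torch.cat([y[labeled_idx], pseudo[top]])
    # 3) Student phase: retrain on the augmented label set.
    fit(aug_idx, aug_y)
```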