Self-paced and self-consistent co-training for semi-supervised image segmentation
2021
DOI: 10.1016/j.media.2021.102146
Cited by 50 publications (13 citation statements: 0 supporting, 13 mentioning, 0 contrasting)
References 33 publications
“…To improve the reliability of pseudo labels, one can adopt an iterative training procedure, which distills previously learned knowledge into a neural network of equal or larger capacity to boost model performance on label estimation (Xie et al., 2019b; Zoph et al., 2020). It would also be interesting to introduce a pseudo-label assessment module that selects high-quality pseudo labels for more effective uncertainty-aware consistency regularization (Xia et al., 2020a,b; Liu and Tan, 2021; Wang et al., 2021a). We will explore these extensions in future work.…”
Section: Supervised Learning With Extremely Low Data Settings (mentioning)
Confidence: 99%
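The pseudo-label assessment idea in this excerpt can be sketched as a per-pixel uncertainty filter. Below is a minimal PyTorch sketch, not taken from the cited works: the function name, entropy proxy, and threshold are illustrative assumptions. It keeps only pixels where the teacher's predictive entropy is low and restricts the consistency loss to those pixels.

```python
# Illustrative sketch (assumed names and threshold, not from the cited papers):
# keep pseudo labels only where the teacher's predictive entropy is low,
# and compute the consistency loss on those trusted pixels.
import torch
import torch.nn.functional as F

def uncertainty_masked_consistency(student_logits, teacher_logits,
                                   entropy_threshold=0.5):
    # student_logits, teacher_logits: (B, C, H, W) segmentation logits.
    teacher_probs = torch.softmax(teacher_logits, dim=1)
    # Per-pixel predictive entropy as a cheap uncertainty proxy.
    entropy = -(teacher_probs * torch.log(teacher_probs + 1e-8)).sum(dim=1)
    mask = (entropy < entropy_threshold).float()   # (B, H, W), 1 = trusted pixel
    pseudo_labels = teacher_probs.argmax(dim=1)    # hard pseudo labels
    loss = F.cross_entropy(student_logits, pseudo_labels, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```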
“…For example, the teacher-student strategy has been employed in model compression [1], to distil knowledge from multi-modal to monomodal segmentation networks [16], or in domain adaptation [40]. Semi-supervised segmentation has also benefited from teacher-student architectures [10, 34, 39]. In these approaches, however, the segmentation loss evaluating the consistency between the teacher and student models is computed on the unannotated data.…”
Section: Related Work (mentioning)
Confidence: 99%
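As a concrete illustration of the teacher-student pattern this excerpt describes, here is a minimal mean-teacher-style sketch in PyTorch. The EMA decay, MSE consistency term, and function names are assumptions, not details from the cited works; the point it shows is that the consistency loss is evaluated on unannotated data only, while the teacher tracks an exponential moving average of the student.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Teacher parameters track an exponential moving average of the student's.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def consistency_step(student, teacher, unlabeled_batch, optimizer):
    # Consistency between teacher and student predictions,
    # computed on unannotated images only.
    with torch.no_grad():
        teacher_probs = torch.softmax(teacher(unlabeled_batch), dim=1)
    student_probs = torch.softmax(student(unlabeled_batch), dim=1)
    loss = torch.mean((student_probs - teacher_probs) ** 2)  # MSE consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```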
“…To overcome the challenge of multi-stage optimization, one potential end-to-end training solution is collaborative training (co-training). Co-training is a common semi-supervised learning method that jointly uses labeled and unlabeled data to improve model generalization through collaboration among multiple learners [23]; it has been broadly applied to image classification [24], target recognition [25], and image segmentation [26]. For medical image analysis, co-training has been adopted for semi-supervised learning [27, 28] or consistency learning across different views [25, 29].…”
Section: Introduction (mentioning)
Confidence: 99%
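The co-training scheme in this excerpt can be made concrete with two segmentation networks that supervise each other on unlabeled data. The sketch below is a generic cross-pseudo-supervision variant in PyTorch, not the paper's exact method; the function name and the unsupervised weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cotraining_step(net_a, net_b, x_l, y_l, x_u, opt_a, opt_b, lam=0.1):
    # Each learner is supervised on the labeled batch (x_l, y_l) and
    # additionally fits the other learner's pseudo labels on the
    # unlabeled batch x_u ("collaboration among multiple learners").
    logits_a_u, logits_b_u = net_a(x_u), net_b(x_u)
    pseudo_a = logits_a_u.argmax(dim=1).detach()   # hard pseudo labels from A
    pseudo_b = logits_b_u.argmax(dim=1).detach()   # hard pseudo labels from B
    loss_a = (F.cross_entropy(net_a(x_l), y_l)
              + lam * F.cross_entropy(logits_a_u, pseudo_b))
    loss_b = (F.cross_entropy(net_b(x_l), y_l)
              + lam * F.cross_entropy(logits_b_u, pseudo_a))
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```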