2020
DOI: 10.1016/j.patcog.2020.107269

Deep co-training for semi-supervised image segmentation

Abstract: In this paper, we aim to improve the performance of semantic image segmentation in a semi-supervised setting where training is performed with a reduced set of annotated images and additional non-annotated images. We present a method based on an ensemble of deep segmentation models. Models are trained on subsets of the annotated data and use non-annotated images to exchange information with each other, similar to co-training. Diversity across models is enforced with the use of adversarial samples. We demonstrat…
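To make the abstract's mechanism concrete, the following is a minimal sketch of how diversity between ensemble members could be enforced with adversarial samples, assuming a PyTorch setup. The FGSM-style perturbation, the KL-based coupling, and all function names are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, eps=0.03):
    # Hypothetical FGSM-style step: perturb the input along the gradient sign
    # of the model's own loss, so the sample becomes "hard" for this model.
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)                      # (B, C, H, W) segmentation logits
    pseudo = logits.argmax(dim=1)               # model's own hard predictions as targets
    loss = F.cross_entropy(logits, pseudo)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + eps * grad.sign()).detach()

def diversity_loss(model_a, model_b, unlabeled):
    # One plausible diversity term: model_b should still match model_a's clean
    # prediction on samples that are adversarial *for model_a*, so the two
    # models cannot share the same failure modes. The exact constraint used
    # in the paper may differ.
    adv = fgsm_perturb(model_a, unlabeled)
    with torch.no_grad():
        target = F.softmax(model_a(unlabeled), dim=1)
    pred = F.log_softmax(model_b(adv), dim=1)
    return F.kl_div(pred, target, reduction="batchmean")
```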

Cited by 166 publications (70 citation statements)
References 55 publications
“…In this context, Ouali et al. (2020) train multiple auxiliary decoders on unlabelled data by enforcing consistency between the class predictions of the main and the auxiliary decoders. Similarly, in (Peng et al., 2020) two segmentation networks are trained via supervision on two disjoint datasets and, additionally, via a co-learning scheme in which consistent predictions of both networks on unlabelled data are enforced. Another approach based on consensus training is presented by Li et al. (2018) and Zhang et al. (2020), who use unlabelled data to train a segmentation network by encouraging consistent predictions for the same input under different geometric transformations.…”
Section: Semi-supervised Segmentation (mentioning)
Confidence: 99%
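As a rough illustration of the co-learning scheme described in the statement above, here is a sketch of a single training step for two segmentation networks supervised on disjoint labeled subsets and coupled by a consistency term on the shared unlabeled images, assuming PyTorch. The symmetric KL form of the consistency term and the weight lam are assumptions made for illustration.

```python
import torch.nn.functional as F

def co_training_step(net1, net2, labeled1, labeled2, unlabeled, lam=0.1):
    # Supervised terms: each network only sees its own labeled subset.
    (x1, y1), (x2, y2) = labeled1, labeled2
    sup = F.cross_entropy(net1(x1), y1) + F.cross_entropy(net2(x2), y2)

    # Co-learning term: push the two predictive distributions together
    # on the shared unlabeled images (symmetric KL, an assumed choice).
    logp1 = F.log_softmax(net1(unlabeled), dim=1)
    logp2 = F.log_softmax(net2(unlabeled), dim=1)
    cons = 0.5 * (F.kl_div(logp1, logp2.exp(), reduction="batchmean")
                  + F.kl_div(logp2, logp1.exp(), reduction="batchmean"))

    return sup + lam * cons
```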
“…In order to ensure diversity between both decoders, a perturbed version z' of the latent representation z, with z' = F(z) for a perturbation function F(·), is fed to the auxiliary decoder, while the uncorrupted representation z is used as input for the main decoder. This procedure of consensus regularisation for semi-supervised segmentation is founded on the rationale that the shared encoder's representation can be enhanced by the additional training signal obtained from the unlabelled data, which acts as additional regularisation on the encoder (Ouali et al., 2020; Peng et al., 2020). Based on the consensus principle (Chao and Sun, 2016), enforcing an agreement between the predictions of multiple decoder branches restricts the parameter search space to cross-consistent solutions and thus improves the generalisation of the different models.…”
Section: Semi-supervision Using Consensus Regularisation (mentioning)
Confidence: 99%
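A minimal sketch of the perturbation-and-consensus step described above, assuming a shared encoder with a main and an auxiliary decoder in PyTorch. The Gaussian-noise choice for F(·), the MSE distance, and the detached main-decoder target are illustrative assumptions rather than the cited papers' exact design.

```python
import torch
import torch.nn.functional as F

def consensus_loss(encoder, main_dec, aux_dec, unlabeled, noise_std=0.1):
    z = encoder(unlabeled)
    z_pert = z + noise_std * torch.randn_like(z)   # one possible perturbation F(z)

    # The main decoder sees the clean representation and provides the target;
    # the auxiliary decoder must agree despite the perturbed input.
    with torch.no_grad():
        main_pred = F.softmax(main_dec(z), dim=1)
    aux_pred = F.softmax(aux_dec(z_pert), dim=1)

    return F.mse_loss(aux_pred, main_pred)
```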
“…Moreover, it is essential to ensure that the different base learners give different and complementary information about each instance [173], namely the view difference constraint or diversity criterion. Peng et al. [174] applied the idea of co-training to semi-supervised segmentation of medical images. Concretely, they trained multiple models on different subsets of the labeled training data and used a common set of unlabeled training images to exchange information with each other.…”
Section: F. Co-training (mentioning)
Confidence: 99%
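The information exchange mentioned in the statement above can be pictured as models pseudo-labelling the shared unlabeled images for one another. The sketch below assumes PyTorch, a per-pixel confidence threshold, and a round-robin teacher assignment, none of which are claimed to match [174] exactly.

```python
import torch
import torch.nn.functional as F

def exchange_pseudo_labels(models, unlabeled, threshold=0.9):
    losses = []
    with torch.no_grad():
        # Every model predicts on the common pool of unlabeled images.
        probs = [F.softmax(m(unlabeled), dim=1) for m in models]

    for i, student in enumerate(models):
        teacher = probs[(i + 1) % len(models)]      # round-robin pairing (assumed)
        conf, pseudo = teacher.max(dim=1)           # per-pixel confidence and label
        mask = conf > threshold                     # only confident pixels teach
        loss = F.cross_entropy(student(unlabeled), pseudo, reduction="none")
        losses.append((loss * mask).sum() / mask.sum().clamp(min=1))
    return sum(losses) / len(losses)
```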
“…During the last decades, a large family of semi-supervised learning methods has been proposed [8]. For instance, co-training is an important semi-supervised learning method, which assumes that the data contain multiple conditionally independent feature subsets and that the data distribution is compatible with the target functions of the different feature subsets [7, 10, 11]. Kamal et al. [12] performed extensive empirical experiments comparing co-training with generative mixture models and expectation maximization (EM), and demonstrated that co-training performs well if the conditional independence assumption indeed holds.…”
Section: Introduction (mentioning)
Confidence: 99%
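For readers less familiar with the classical setting referenced in this statement, the following is a small sketch of two-view co-training in the Blum-Mitchell spirit: each classifier sees one feature subset and teaches the other its most confident pseudo-labels. The scikit-learn classifier, the number of rounds, and k are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X_view1, X_view2, y, labeled_idx, unlabeled_idx, rounds=5, k=10):
    labeled, unlabeled = list(labeled_idx), list(unlabeled_idx)
    clf1, clf2 = LogisticRegression(max_iter=1000), LogisticRegression(max_iter=1000)

    for _ in range(rounds):
        # Each classifier is (re)fit on its own feature view of the labeled pool.
        clf1.fit(X_view1[labeled], y[labeled])
        clf2.fit(X_view2[labeled], y[labeled])
        if not unlabeled:
            break
        for clf, X in ((clf1, X_view1), (clf2, X_view2)):
            proba = clf.predict_proba(X[unlabeled])
            top = np.argsort(proba.max(axis=1))[-k:]          # most confident samples
            for j in top:
                idx = unlabeled[j]
                y[idx] = clf.classes_[proba[j].argmax()]       # adopt the pseudo-label
                labeled.append(idx)
            unlabeled = [u for i, u in enumerate(unlabeled) if i not in set(top)]
    return clf1, clf2
```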