2021
DOI: 10.48550/arxiv.2103.04813
Preprint
Boosting Semi-supervised Image Segmentation with Global and Local Mutual Information Regularization

Jizong Peng,
Marco Pedersoli,
Christian Desrosiers

Abstract: The scarcity of labeled data often impedes the application of deep learning to the segmentation of medical images. Semi-supervised learning seeks to overcome this limitation by leveraging unlabeled examples in the learning process. In this paper, we present a novel semi-supervised segmentation method that leverages mutual information (MI) on categorical distributions to achieve both global representation invariance and local smoothness. In this method, we maximize the MI for intermediate feature embeddings tha…
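The MI maximization on categorical distributions that the abstract describes can be sketched with a common IIC-style estimator over paired softmax predictions. This is an illustrative assumption, not the paper's exact loss: the function name, the symmetrization step, and the pairing of two augmented views are all choices made here for the sketch.

```python
import numpy as np

def mutual_info_loss(p1, p2, eps=1e-8):
    """Negative mutual information between paired categorical predictions.

    IIC-style estimator (a common formulation; the paper's exact loss
    may differ). p1, p2: arrays of shape (n, k) whose rows are softmax
    distributions over k classes for two views of the same samples.
    """
    # Joint distribution over class pairs, averaged over the batch.
    joint = p1.T @ p2 / p1.shape[0]          # (k, k)
    joint = (joint + joint.T) / 2            # symmetrize
    joint = joint / joint.sum()              # renormalize
    # Marginals of the joint.
    pi = joint.sum(axis=1, keepdims=True)    # (k, 1)
    pj = joint.sum(axis=0, keepdims=True)    # (1, k)
    # MI = sum_ij P_ij * log(P_ij / (P_i * P_j))
    mi = np.sum(joint * (np.log(joint + eps)
                         - np.log(pi + eps)
                         - np.log(pj + eps)))
    return -mi  # minimizing the negative maximizes MI
```

Perfectly correlated confident predictions drive the loss toward -log(k), while uninformative uniform predictions give a loss near zero, which is why maximizing this quantity encourages consistent, decisive cluster assignments across views.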

Cited by 7 publications (12 citation statements) · References 40 publications
“…As seen in the table, our approach uses a low number of labelled samples and produces the best results in terms of DSC and HD, whereas methods like [2] use double the amount of labelled data, yet achieve inferior performance as compared to our results, proving the efficiency and robustness of our model. Only Peng et al [7] uses a lower number of labelled data, but produces poorer results as well. Our method produces a reasonably better result by making use of only 10% of the labelled data available for training.…”
Section: Results on the ACDC 2017 Dataset
confidence: 99%
“…All the other parameters were the same as those methods, to maintain fair comparison. Peng et al [7] use the least number of labelled samples in their method, but they have produced vastly substandard results. Whereas, our method obtains better results as compared to most state-of-the-art methods, by using 40% labelled data in the entire training set.…”
Section: Results on the MMWHS Dataset
confidence: 99%
“…(c) Semi-supervised local contrastive learning: Recently, some concurrent works [25], [26], [47], [48], [49], [50], [51] have devised semi-supervised learning frameworks that use unlabeled images with some variant of contrastive loss setup. The works relevant to presented work are [25], [26].…”
Section: Related Work
confidence: 99%
“…Reg. [44], MSE [44], KL [44] and Peng et al 's method [23]. We also list the fully-supervised setting as a reference for the upper bound of semi-supervised segmentation methods.…”
Section: F. ACDC Segmentation
confidence: 99%
“…Since all these methods are evaluated under the same setting, we directly report their performance presented in their literature. Note that, since the number of slices along z-axis is sometimes limited (i.e., less than 10), thus making 3D convolutions hard to perform, above methods usually adopted the 2D setting [53] [44] [23]. For fair comparison, we follow their 2D setting.…”
Section: F. ACDC Segmentation
confidence: 99%