2021
DOI: 10.48550/arxiv.2106.09157
Preprint

Positional Contrastive Learning for Volumetric Medical Image Segmentation

Abstract: The success of deep learning heavily depends on the availability of large labeled training sets. However, it is hard to obtain large labeled datasets in the medical imaging domain because of strict privacy concerns and costly labeling efforts. Contrastive learning, an unsupervised learning technique, has proven powerful in learning image-level representations from unlabeled data. The learned encoder can then be transferred or fine-tuned to improve the performance of downstream tasks with limited labels. A critic…

Cited by 8 publications (11 citation statements)
References 23 publications
“…In such a technique, a model is pre-trained to perform a given pretext task, for example puzzle-solving (Noroozi & Favaro, 2016; Taleb et al., 2021), rotation prediction (Gidaris et al., 2018), colorization (Zhang et al., 2016), or contrastive-based instance discrimination (Hjelm et al., 2018; Chen et al., 2020; He et al., 2020), and then fine-tuned with a small set of labeled examples. Among these self-supervised methods, contrastive learning has become a prevailing strategy for pre-training medical image segmentation models (Chaitanya et al., 2020; Zeng et al., 2021; Peng et al., 2021b). The core idea of this strategy is to learn, without pixel-wise annotations, an image representation which can discriminate related images (e.g., two transformations of the same image) from non-related ones.…”
Section: Introduction (mentioning)
confidence: 99%
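To make the instance-discrimination idea in the statement above concrete, here is a minimal PyTorch sketch of the NT-Xent (InfoNCE) loss used by SimCLR-style methods (Chen et al., 2020); the function name and temperature value are illustrative, not taken from any cited codebase.

```python
# Minimal sketch of the NT-Xent (InfoNCE) loss behind instance
# discrimination, in the spirit of SimCLR (Chen et al., 2020).
# All names and the default temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # The positive for sample i is its other view: i + N (mod 2N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

The loss pulls the two views of each image together and pushes all other samples in the batch away; the temperature controls how sharply hard negatives are weighted.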
“…The deficiency of labels makes supervised FL impractical. Self-supervised learning can address this challenge by pre-training a neural network encoder with unlabeled data, followed by fine-tuning for a downstream task with limited labels (Zeng et al., 2021). Contrastive learning (CL), an effective self-supervised learning approach (Chen et al., 2020a), can learn data representations from unlabeled data to improve the model.…”
Section: Introduction (mentioning)
confidence: 99%
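As a rough illustration of the pre-train-then-fine-tune recipe this statement describes, the toy loop below contrastively pre-trains a tiny encoder on unlabeled tensors (reusing nt_xent_loss from the sketch above) and then fine-tunes it with a handful of labels. The architecture and random data are stand-ins, not the cited method.

```python
# Toy, self-contained sketch of the pre-train-then-fine-tune recipe.
# The tiny encoder, the head, and the random tensors are stand-ins;
# real work would use a U-Net-style encoder and actual medical images.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (N, 8)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Stage 1: contrastive pre-training on unlabeled images (no masks needed).
for _ in range(10):
    x = torch.randn(16, 1, 64, 64)                 # batch of unlabeled slices
    x1 = x + 0.1 * torch.randn_like(x)             # view 1: noise augmentation
    x2 = torch.flip(x, dims=[3])                   # view 2: horizontal flip
    loss = nt_xent_loss(encoder(x1), encoder(x2))  # from the sketch above
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune encoder + small head with the few labeled examples.
head = nn.Linear(8, 2)                 # stand-in for a segmentation decoder
model = nn.Sequential(encoder, head)
ft_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(10):
    x, y = torch.randn(16, 1, 64, 64), torch.randint(0, 2, (16,))
    ft_loss = F.cross_entropy(model(x), y)
    ft_opt.zero_grad(); ft_loss.backward(); ft_opt.step()
```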
“…Although some recent works, such as MoCo-CXR [13] and MedAug [14], attempt to apply contrastive learning to large-scale CXR datasets to improve representations for CXR interpretation, they were only evaluated on the disease classification task; the performance of their methods on image segmentation tasks is still unknown. In addition, some works use contrastive learning to improve volumetric image segmentation models [17], [18], but their learning approaches depend on the characteristics of 3D images and thus cannot be applied to CXR directly.…”
Section: Introduction (mentioning)
confidence: 99%
“…[17] proposed a global and local contrastive learning framework for volumetric medical image segmentation with limited annotations. PCL [18] improved global contrastive learning by introducing the position of 2D slices in the volumetric image to select contrastive pairs. PGL [28] proposed a prior-guided self-supervised model to learn region-wise local consistency in the latent feature space for segmentation tasks.…”
Section: Introduction (mentioning)
confidence: 99%
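The following is a hedged sketch of how the position-based pair selection attributed to PCL [18] above could look: 2D slices whose normalized positions within their volumes differ by less than a threshold are treated as positives. The threshold tau, the loss form (a supervised-contrastive-style average over positives), and all names are one plausible reading, not the authors' code.

```python
# Hedged sketch of position-based pair selection in the spirit of PCL [18]:
# slices at similar normalized positions in their volumes become positives.
# `tau`, the loss form, and all names are illustrative assumptions.
import torch
import torch.nn.functional as F

def positional_contrastive_loss(z, pos, tau=0.1, temperature=0.1):
    """z: (N, D) slice embeddings; pos: (N,) normalized slice positions in [0, 1]."""
    n = z.shape[0]
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                           # (N, N) similarities
    # Positive mask: pairs with |pos_i - pos_j| < tau, excluding self-pairs.
    close = (pos.unsqueeze(0) - pos.unsqueeze(1)).abs() < tau
    close.fill_diagonal_(False)
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # stability shift
    exp_no_self = torch.exp(logits) * (1 - torch.eye(n))         # drop self term
    log_prob = logits - exp_no_self.sum(dim=1, keepdim=True).log()
    # Average InfoNCE over all positives of each anchor that has at least one.
    has_pos = close.any(dim=1)
    loss = -(log_prob * close).sum(dim=1)[has_pos] / close.sum(dim=1)[has_pos]
    return loss.mean()
```

Compared with plain instance discrimination, this treats anatomically corresponding slices from different volumes as additional positives, which is the stated motivation for using slice position to select contrastive pairs.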