“…In this technique, a model is pre-trained to perform a given pretext task, for example puzzle solving (Noroozi & Favaro, 2016; Taleb et al., 2021), rotation prediction (Gidaris et al., 2018), colorization (Zhang et al., 2016) or contrastive instance discrimination (Hjelm et al., 2018; Chen et al., 2020; He et al., 2020), and is then fine-tuned with a small set of labeled examples. Among these self-supervised methods, contrastive learning has become a prevailing strategy for pre-training medical image segmentation models (Chaitanya et al., 2020; Zeng et al., 2021; Peng et al., 2021b). The core idea of this strategy is to learn, without pixel-wise annotations, an image representation that can discriminate related images (e.g., two transformations of the same image) from non-related ones.…”
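
To make the instance-discrimination objective concrete, below is a minimal sketch of a typical contrastive loss in the NT-Xent form popularized by SimCLR (Chen et al., 2020), written in PyTorch. This is an illustration of the general strategy, not the exact formulation of any of the cited papers; the function name `nt_xent_loss`, the temperature value, and the `encoder` in the usage comment are illustrative assumptions.

```python
# Minimal sketch of a contrastive instance-discrimination loss (NT-Xent).
# Assumptions: a generic PyTorch setup; names and hyperparameters are
# illustrative, not taken from the excerpted paper.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over a batch of paired embeddings.

    z1, z2: (N, D) embeddings of two augmentations ("views") of the same
    N images. For each embedding, the other view of the same image is the
    positive; the remaining 2N - 2 embeddings in the batch are negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    # Positive of embedding i is i + n (and vice versa), i.e. the same
    # image seen under the other transformation.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage: embed two augmented views of the same unlabeled batch, e.g.
#   loss = nt_xent_loss(encoder(view1), encoder(view2))
# so related images are pulled together and non-related ones pushed apart,
# with no pixel-wise annotations involved.
```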