2023
DOI: 10.1007/s10278-023-00782-4

CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning

Abstract: Training deep learning models on medical images heavily depends on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray pre-trained model via self-supervised contrastive learning (CheSS) was proposed to learn models with various representations in chest radiographs (CXRs). Our contribution is a publicly accessible pretrained model trained w…
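For readers unfamiliar with the approach the abstract names, the sketch below illustrates generic self-supervised contrastive pretraining of the kind CheSS builds on. It is a minimal SimCLR-style example with an NT-Xent loss and a ResNet-50 backbone; the projection-head sizes, augmentation stand-ins, batch shapes, and optimizer here are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of SimCLR-style contrastive pretraining with an
# NT-Xent loss, assuming a ResNet-50 backbone. Illustrates the general
# technique only; the exact CheSS recipe (architecture, loss variant,
# augmentations, hyperparameters) may differ.
import torch
import torch.nn.functional as F
from torchvision import models

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive loss: each view's positive is its paired augmentation."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # pairwise cosine sims
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # drop self-similarity
    # Row i's positive sits at i+N (first half) or i-N (second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# ResNet-50 encoder with a small projection head (illustrative sizes).
encoder = models.resnet50(weights=None)
encoder.fc = torch.nn.Sequential(
    torch.nn.Linear(2048, 512), torch.nn.ReLU(), torch.nn.Linear(512, 128)
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# One training step on two augmented views of the same CXR batch
# (random tensors stand in for real augmented radiographs).
x1, x2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
loss = nt_xent_loss(encoder(x1), encoder(x2))
loss.backward()
optimizer.step()
```

After pretraining, the projection head is typically discarded and the backbone alone is fine-tuned on the downstream labeled task, which is how publicly released pretrained weights of this kind are usually consumed.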

Cited by 14 publications (3 citation statements) | References 28 publications
“…One group of researchers trained a vision foundation model on 100 million medical images, including radiographs, CT images, MR images, and ultrasound images [67]. Another group of researchers trained a self-supervised network on 4.8 million chest radiographs [68]. However, the networks were not scalable despite these studies training their networks on a vast amount of data and demonstrating their diverse utility…”
Section: Large Language Model (LLM)
confidence: 99%
“…[4][5][6][7][8][9][10][11][12][13] However, medical image processing faces unique challenges, notably limited data and labeling; consequently, recent efforts have aimed to reduce reliance on data annotation across various medical image types [14][15][16][17]. Studies have predominantly focused on self-supervised methods for X-ray images [18][19][20][21][22][23], with comparatively fewer articles addressing MRI [24][25][26][27] and CT scans [28][29][30][31][32], and notably fewer on ultrasound images [33,34]. Articles discussing HIFU control and monitoring include [35], which presents a method utilizing ultrasound signals as input to a feedforward neural network for lesion-area detection…”
Section: Literature Review
confidence: 99%
“…Simultaneously, complex vision-language models like CLIP [16], DALL-E [25], FLAVA [26], and Socratic Models [34] were trained in a self-supervised way to understand correlations in multi-modal data, enable reasoning across multiple modalities, and even support generation tasks. SSL is also becoming popular for radiology image analysis [3], [6], [20], [27]; however, unlike natural images, self-supervision on radiology images needs targeted pretraining on unlabeled medical images, so understanding the effects of various self-supervision strategies is important for planning experiments in advance and reducing unnecessary waste of computation resources and training time. A “gap” thus exists in the literature, since no study benchmarks SSL strategies for medical images…”
Section: Introduction
confidence: 99%