2022
DOI: 10.1117/1.jmi.9.6.064503

Contrastive self-supervised learning from 100 million medical images with optional supervision

Abstract: Purpose: Building accurate and robust artificial intelligence systems for medical image assessment requires the creation of large sets of annotated training examples. However, constructing such datasets is very costly due to the complex nature of annotation tasks, which often require expert knowledge (e.g., a radiologist). To counter this limitation, we propose a method to learn from medical images at scale in a self-supervised way. Approach: Our approach, based on contrastive learning and online feature clustering…
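The abstract describes an approach based on contrastive learning with online feature clustering. As a rough illustration of the contrastive principle only (not the paper's exact objective, and omitting the clustering component), an InfoNCE-style loss can be sketched in NumPy; the function name and temperature value are illustrative assumptions:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Illustrative InfoNCE loss: row i of z1 and row i of z2 are embeddings
    of two augmented views of the same image (a positive pair); all other
    rows in the batch act as negatives."""
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise similarity matrix
    # Positive pairs lie on the diagonal; cross-entropy with identity targets
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

With matched views the loss approaches zero, while unrelated embeddings give a loss near log(batch size), which is what drives the encoder to align views of the same image.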

Cited by 29 publications (11 citation statements)
References 45 publications
“…In this study, we demonstrated that our foundation model, trained using self-supervised learning, provided robust quantitative biomarkers for predicting anatomical site, malignancy, and prognosis across three different use cases in four cohorts. Several studies [19][20][21] have demonstrated the efficacy of self-supervised learning in medicine, where only limited data might be available for training deep learning networks. Our findings complement and extend this work by identifying reliable imaging biomarkers for cancer-associated use cases.…”
Section: Discussion
Confidence: 99%
“…For our task, we had annotated data and formulated pretraining and a supervision task. However, if more data without annotations are available, self-supervised methods deserve exploration [31].…”
Section: A. CCC Detection
Confidence: 99%
“…Due to the higher precision of the input data, the inbuilt PyTorch transforms are not directly used and must be re-implemented to fit the data format. Augmentations such as random intensity scaling and horizontal flipping are used, along with a custom normalization of the DICOM images which includes histogram equalization and dynamic window scaling [27].…”
Section: Implementation Details
Confidence: 99%
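The augmentations named in the statement above (random intensity scaling, horizontal flipping, histogram equalization) can be sketched roughly as follows; the scaling range, bin count, and function names are assumptions for illustration, not details taken from the cited implementation:

```python
import numpy as np

def histogram_equalize(img, n_bins=256):
    """Map intensities through the empirical CDF so the output uses the
    full [0, 1] range more evenly (illustrative sketch)."""
    hist, bin_edges = np.histogram(img.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize CDF to [0, 1]
    return np.interp(img.ravel(), bin_edges[:-1], cdf).reshape(img.shape)

def augment(img, rng):
    """Random intensity scaling and horizontal flip, then equalization.
    The 0.9-1.1 scaling range is an assumed, typical choice."""
    img = img * rng.uniform(0.9, 1.1)  # random intensity scaling
    if rng.random() < 0.5:
        img = img[:, ::-1]             # horizontal flip
    return histogram_equalize(img)
```

Working on the raw float array rather than 8-bit images is one way to respect the higher precision of DICOM data that the statement mentions as the reason the stock PyTorch transforms do not apply directly.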