2022
DOI: 10.1038/s41551-022-00936-9

Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning

Abstract: In tasks involving the interpretation of medical images, suitably trained machine-learning models often exceed the performance of medical experts. Yet such a high level of performance typically requires that the models be trained with relevant datasets that have been painstakingly annotated by experts. Here we show that a self-supervised model trained on chest X-ray images that lack explicit annotations performs pathology-classification tasks with accuracies comparable to those of radiologists. On an external …

Cited by 149 publications (60 citation statements)
References 42 publications
“…Unlike CNN-based models, large language models or multimodal models have been developed more recently. Publications using text data or multimodal data have been steadily increasing, and their maturity is improving 25,26 .…”
Section: Discussion
confidence: 99%
“…Unlike CNN-based models, large language models or multimodal models have been developed more recently. Publications using text data or multimodal data have been steadily increasing, and their maturity is improving 25,26 . Readily available CNN algorithms and large imaging data repositories enabled radiology and other image-based specialties such as ophthalmology, gastroenterology, oncology, and cardiology to generate a huge growth of mature model publication.…”
confidence: 99%
“…For instance, within months after its release, GPT-3 powered more than 300 apps across various industries 42 . As a promising early example of a medical foundation model, CheXzero can be applied to detect dozens of diseases in chest X-rays without being trained on explicit labels for these diseases 9 . Likewise, the shift towards GMAI will drive the development and release of large-scale medical AI models with broad capabilities, which will form the basis for various downstream clinical applications.…”
Section: Adaptability
confidence: 99%
“…Although there have been early efforts to develop medical foundation models [8][9][10][11] , this shift has not yet widely permeated medical AI, owing to the difficulty of accessing large, diverse medical datasets, the complexity of the medical domain and the recency of this development. Instead, medical AI models are largely still developed with a task-specific approach to model development.…”
confidence: 99%
“…For example, "tumor" was converted to "An H&E image of tumor". As a natural comparison on the zero-shot task, we compared PLIP with the original CLIP model, which has been frequently used for other medical image tasks [31][32][33] and has already been trained from other medical images. Our analysis showed that PLIP consistently outperformed the baseline CLIP model and the results from predicting the majority class (or Majority) (Figure 2c).…”
Section: PLIP Can Perform Zero-Shot Classification On New Data
confidence: 99%
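The zero-shot step this quote describes — wrapping each class name in a prompt template, encoding the prompts, and picking the label whose text embedding best matches the image embedding — can be sketched as follows. This is a minimal illustration, not PLIP's actual pipeline: `toy_text_encoder` is a hypothetical stand-in for a real CLIP-style text encoder, and only the template string ("An H&E image of {}") comes from the quote itself.

```python
import numpy as np

def toy_text_encoder(prompt):
    # Hypothetical stand-in for a CLIP text encoder: a deterministic
    # pseudo-embedding derived from the prompt length (illustration only;
    # a real pipeline would use a pretrained encoder).
    rng = np.random.default_rng(len(prompt))
    return rng.normal(size=64)

def zero_shot_classify(image_emb, class_names, text_encoder,
                       template="An H&E image of {}"):
    """Classify an image embedding against prompt embeddings.

    Each class name is wrapped in the prompt template (as in the PLIP
    example, where "tumor" becomes "An H&E image of tumor"), encoded,
    and compared to the image by cosine similarity; the best-matching
    prompt gives the predicted label. No labeled training data for the
    target classes is needed.
    """
    prompts = [template.format(name) for name in class_names]
    text_embs = np.stack([text_encoder(p) for p in prompts])
    # L2-normalize so dot products equal cosine similarities.
    text_embs /= np.linalg.norm(text_embs, axis=1, keepdims=True)
    image_emb = image_emb / np.linalg.norm(image_emb)
    scores = text_embs @ image_emb
    return class_names[int(np.argmax(scores))], scores
```

With the toy encoder, an image embedding that coincides with the "tumor" prompt embedding is assigned the "tumor" label; in a real system the image and text encoders are trained jointly (contrastively) so that matching image-text pairs land close together in the shared embedding space.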