2022
DOI: 10.1148/ryai.220119
On the Opportunities and Risks of Foundation Models for Natural Language Processing in Radiology

Cited by 27 publications (12 citation statements)
References 4 publications
“…NLP has received considerable media attention in the past months due to the release of large DL models called foundation models 5 . These models can be repurposed for various tasks, such as generating text or images, after being trained on a wide range of unlabelled data 6 7 . A prominent example of a foundation model is Generative Pre-trained Transformer 3 (GPT-3), a large language model (LLM) that generates human-like text.…”
Section: Introduction
confidence: 99%
“…6–9 While improving model performance over the past decade was largely limited to optimizing architectures and scaling up training datasets, both strategies are particularly challenging in epilepsy surgery, where cohorts are typically small. Foundation AI models were initially introduced in natural language processing in 2018 but have since been applied in computer vision and robotics. 10–12 Vision transformers revolutionized how AI models approach sequential data and images by allowing complex predictions on smaller datasets, enabled by transfer learning and scale.…”
Section: Review
confidence: 99%
“…Foundation models have demonstrated impressive zero-shot and few-shot generalization [22], [23]. This progress has been extended to training multi-modal foundation models such as CLIP [24].…”
Section: Segment Anything Model
confidence: 99%