2022
DOI: 10.2196/32875
Operationalizing and Implementing Pretrained, Large Artificial Intelligence Linguistic Models in the US Health Care System: Outlook of Generative Pretrained Transformer 3 (GPT-3) as a Service Model

Abstract: Generative pretrained transformer models have been popular recently due to their enhanced capabilities and performance. In contrast to many existing artificial intelligence models, generative pretrained transformer models can perform with very limited training data. Generative pretrained transformer 3 (GPT-3) is one of the latest releases in this pipeline, demonstrating human-like logical and intellectual responses to prompts. Some examples include writing essays, answering complex questions, matching pronouns…

Cited by 96 publications (76 citation statements)
References 15 publications
“…Several challenges may be encountered when considering the implementation of similar NLP-supported tools [ 33 ]. Prior to implementation, a health institution must ensure data privacy and integrity, consider the necessities of information system infrastructure, model, and system performance, as well as performing assessment for algorithmic bias [ 33 - 35 ]. From a provider standpoint, as many institutions are working on reducing provider alert burden [ 36 ], they should be cautious toward implementing such tools not to increase provider alerting, which has been associated with provider burnout.…”
Section: Discussion (mentioning)
confidence: 99%
“…The GPT-3 is a powerful artificial intelligence that has a number of potential applications (Sezgin et al, 2022). It is capable of learning from a large dataset of text and making predictions about new text.…”
Section: Negative Sides of GPT-3 (mentioning)
confidence: 99%
“…With our findings, a natural idea is to use sensitivity as a signal to abstain from making predictions on examples that are likely to have wrong predictions-an important mechanism to increase user trust when deploying ICL models into the real world, especially in high-stakes domains such as medical (Korngiebel and Mooney, 2021; Sezgin et al, 2022) and legal (Eliot and Lance, 2021). Unlike the fully supervised setting, training an abstention predictor is impossible in the few-shot scenario as only few labeled examples are available.…”
Section: Introduction (mentioning)
confidence: 99%
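The abstention idea quoted above can be illustrated with a minimal sketch. The code below is not the cited authors' method; it assumes a caller-supplied `predict` function for an in-context-learning model, estimates sensitivity by reshuffling the few-shot demonstrations, and abstains when the prediction flips too often. The function name, perturbation strategy, threshold, and toy model are all assumptions introduced here for illustration.

```python
import random
from typing import Callable, List, Tuple

def sensitivity_abstain(
    predict: Callable[[List[Tuple[str, str]], str], str],
    demos: List[Tuple[str, str]],
    query: str,
    n_perturbations: int = 8,
    threshold: float = 0.25,
    seed: int = 0,
) -> Tuple[str, bool]:
    """Hypothetical sensitivity-based abstention sketch (not the cited paper's exact method).

    The few-shot demonstrations in the prompt are reshuffled several times; if the
    model's answer changes too often across these perturbations, the prediction is
    treated as unreliable and the caller is advised to abstain.
    """
    rng = random.Random(seed)
    base = predict(demos, query)              # prediction with the original demonstration order
    flips = 0
    for _ in range(n_perturbations):
        shuffled = demos[:]
        rng.shuffle(shuffled)                 # perturb the prompt by reordering demonstrations
        if predict(shuffled, query) != base:
            flips += 1
    sensitivity = flips / n_perturbations     # fraction of perturbations that change the answer
    return base, sensitivity > threshold      # (prediction, abstain?)


if __name__ == "__main__":
    # Toy stand-in for an ICL model, used only to make the sketch runnable.
    def toy_predict(demos: List[Tuple[str, str]], query: str) -> str:
        return "negative" if "bad" in query else "positive"

    demos = [("great movie", "positive"), ("terrible plot", "negative")]
    pred, abstain = sensitivity_abstain(toy_predict, demos, "not bad at all")
    print(pred, "abstain" if abstain else "keep")
```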