2023
DOI: 10.1101/2023.04.18.23288752
Preprint

Faithful AI in Medicine: A Systematic Review with Large Language Models and Beyond

Abstract: Artificial intelligence (AI) holds great promise in healthcare and medicine, with the potential to help across many areas, from biological scientific discovery, to clinical patient care, to public health policy making. However, the risk that AI methods generate factually incorrect or unfaithful information is a major concern, and could lead to serious consequences. This review aims to provide a comprehensive overview of the faithfulness problem in existing research on AI in healthcare and medicine, i…

Cited by 12 publications (5 citation statements) | References 87 publications

“…Most initial publications on the evaluation of these GenAI solutions center on evaluating unregulated summarization and translation tools 51,64 . Some other studies have focused on evaluating experimental conversational GenAI solutions, which aim to address the accuracy and contextual relevance of the information provided to patients and providers 52,65 .…”
Section: Operationalization of ISO 42001 Certification (mentioning)
confidence: 99%
“…The EU AI Act has extraterritorial reach and imposes numerous obligations on providers, deployers, importers, and distributors of such high-risk healthcare AI systems 29,35,51–53 . The EU AI Act gradually becomes effective, with the obligations specific to high-risk AI systems going into effect 12 months after the EU AI Act enters into force, and violation of such obligations can involve penalties of up to 3% of global annual turnover (or 15 million euros, whichever is larger).…”
(mentioning)
confidence: 99%
“…Through concerted efforts to enhance the robustness of AI systems and promote transparency, the AI community can work toward mitigating the risks associated with AI hallucinations and safeguarding the integrity of AI applications. Collective insights from researchers such as [3–11] underscore the multifaceted nature of this issue and the diverse strategies required to address it. The potential for AI to inadvertently transcribe unsafe content on platforms such as YouTube is a stark reminder of the challenges posed by AI hallucinations.…”
Section: Urgent Strategies for Ensuring the Accuracy and Integrity of... (mentioning)
confidence: 99%
“…Several parallel research lines aim to enhance radiology report summarization with a different methodological focus. First, several studies optimize factual consistency through reinforcement learning (Zhang et al., 2020b; Delbrouck et al., 2022) or post-hoc reranking (Xie et al., 2023). Second, Karn et al. (2022) devise an extract-then-abstract pipeline with multi-agent reinforcement learning.…”
Section: Related Work (mentioning)
confidence: 99%