2023
DOI: 10.1007/s11604-023-01474-3
Fairness of artificial intelligence in healthcare: review and recommendations

Abstract: In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers…

Cited by 117 publications (29 citation statements)
References 125 publications
“…Transparency ensures an understanding of the model’s knowledge, context, and limitations, aids in identifying potential biases, and facilitates independent replication and validation, which are fundamental to scientific integrity. As generative AI continues to evolve, fostering a culture of rigorous transparency is essential to ensure their safe, effective, and equitable application in clinical settings (75), ultimately enhancing the quality of healthcare delivery and medical education.…”
Section: Discussion (mentioning)
confidence: 99%
“…This study is the first attempt to evaluate ChatGPT's ability to interpret actual clinical radiology reports, rather than from settings like image diagnosis quizzes. The majority of previous research (6-12, 17-21) suggested the utility of ChatGPT in diagnostics, but these relied heavily on hypothetical environments such as quizzes from academic journals or examination questions (26). This approach can lead to a cognitive bias since the individuals formulating the imaging findings or exam questions also possess the answers.…”
Section: Discussion (mentioning)
confidence: 99%
“…Related to these concerns is the demand for explainability of AI models, which in turn also needs to be considered when AI technology is to be used for pathway modelling, or for the integration of a chatbot or other AI supported functions into digital patient pathways. Stakeholders interested in implementing digital patient pathways ought to keep that demand in mind, which is also already widely being discussed by researchers [see, e.g., (133-135)].…”
Section: Walking Along the Pathway: MS Treatment Could Change for the... (mentioning)
confidence: 99%