2019
DOI: 10.1001/jama.2019.18058

Addressing Bias in Artificial Intelligence in Health Care

Abstract: Recent scrutiny of artificial intelligence (AI)-based facial recognition software has renewed concerns about the unintended effects of AI on social bias and inequity. Academic and government officials have raised concerns over racial and gender bias in several AI-based technologies, including internet search engines and algorithms to predict risk of criminal behavior. Companies like IBM and Microsoft have made public commitments to "de-bias" their technologies, whereas Amazon mounted a public campaign criticizing such researc…

Cited by 460 publications (321 citation statements)
References 9 publications
“…We note in particular a guide to reading the literature 10 , an accompanying editorial 11 , and a viewpoint review 12 of the National Academy of Medicine’s comprehensive exploration of AI in healthcare 13 . Possible biases in the design and development of AI systems in conjunction with EHRs have also been explored 14 , as has their remediation 15 and the potential legal liability risk for a provider using AI 16 . Considering the influential regulatory framework in the US on Software as a Medical Device, how should the lifecycle of an AI system be viewed, especially if it is adaptive and—at least in theory—self-improving 17 ?…”
Section: Results (mentioning; confidence: 99%)
“…One particularly worrying type of error arises from underrepresentation of minorities in the training data for AI systems—such as an application for detecting melanoma that is trained only on white skin. Another is the replication of social biases, such as delayed lung cancer diagnosis in patients of low socioeconomic status 14 15. By mechanisms such as these, AI replicates and could even exacerbate health inequities.…”
Section: Reporting Harm (mentioning; confidence: 99%)
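The underrepresentation mechanism described above is commonly surfaced by disaggregating a model's error rate per demographic subgroup rather than reporting a single aggregate figure. A minimal sketch, using entirely hypothetical toy data (the function name and labels are illustrative, not from the cited works):

```python
# Minimal sketch: audit a classifier's misclassification rate per subgroup.
# A model can look accurate overall while failing badly on a group that was
# underrepresented in its training data.
from collections import defaultdict

def subgroup_error_rates(y_true, y_pred, groups):
    """Return the misclassification rate for each subgroup label."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: the model is perfect on majority group "A" but errs on every
# case from underrepresented group "B" — invisible in the overall rate.
y_true = [1, 0, 1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "A", "A"]
print(subgroup_error_rates(y_true, y_pred, groups))  # {'A': 0.0, 'B': 1.0}
```

Here the aggregate error rate (2/8 = 25%) obscures a 100% failure rate on group "B", which is exactly the pattern the melanoma example warns about.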
“…Indeed, it is important to continuously update algorithms with new patient data so that their decision making can adapt. Statistical bias may be inherent due to suboptimal sampling, measurement error in predictor variables, and heterogeneity of effects [33]. This underlines the importance of transparency, to enable the technology to be as accurate as possible and to avoid potential bias [34].…”
Section: Limits of AI (mentioning; confidence: 99%)
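The "suboptimal sampling" mechanism in the excerpt above can be illustrated with a short simulation. This is a sketch with synthetic numbers (the 30% prevalence and enrollment probabilities are invented for illustration): if sicker patients are more likely to be enrolled, the sampled prevalence that a model is later trained on overstates the true population value.

```python
# Minimal sketch (synthetic numbers): suboptimal sampling biases the
# prevalence estimate that a downstream model inherits.
import random

random.seed(0)

# Hypothetical population: 30% of patients carry the condition.
population = [1] * 300 + [0] * 700

# Biased enrollment: affected patients are twice as likely to be sampled
# (40% vs 20%), so the sample over-represents them.
sample = [x for x in population if random.random() < (0.4 if x == 1 else 0.2)]

true_prevalence = sum(population) / len(population)
sampled_prevalence = sum(sample) / len(sample)
print(true_prevalence, round(sampled_prevalence, 2))
```

With these enrollment rates the expected sampled prevalence is roughly 0.46 against a true value of 0.30, a systematic gap that no amount of additional biased data will close — which is why the excerpt pairs continuous updating with transparency about how the data were collected.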