2023
DOI: 10.2196/41089
Artificial Intelligence Bias in Health Care: Web-Based Survey

Abstract: Background: Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications aiming to improve diagnosis, treatment, and prevention of diseases. While the need for transparency and reduction of bias in data and algorithm development has been addressed in past studies, little is known about the knowledge and perception of bias among AI developers. Objective: This study's objective was to survey AI specialists in health ca…

Cited by 11 publications (4 citation statements)
References 44 publications
“…Lastly, in additional analysis, we found evidence suggesting that the effect of subject-targeting discrimination could be more pronounced among the non-binary group than among females and males. This aligns with existing evidence of the prevalence of AI bias against gender minorities in society and their reactions to the emerging technology (e.g., Fosch-Villaronga et al, 2021;Vorisek et al, 2023).…”
Section: Discussion (supporting)
confidence: 86%
“…Currently, AI cannot understand non-verbal cues or body language. Also, bias in data and inaccuracy is troubling [40, 41].…”
Section: Discussion (mentioning)
confidence: 99%
“…There is overwhelming evidence that there are biases in various artificial intelligence models (AIMs) applied to machine learning algorithms (MLAGs) used in health care and other industries [7, 16, 35, 36, 37, 38, 39]. The uses of some of these MLAGs impact and affect many lives and livelihoods, and in many cases, they eventually prove to be devastating to those affected by them [22, 23, 40].…”
Section: The Current State of Machine Learning (ML) Models in Health … (mentioning)
confidence: 99%
“…Herein, we will be referring to “bias” not as a lack of internal validity or the imprecise gauging of a relationship(s) between a given exposure and an outcome or effect in a population with particular characteristics [1], although these are important aspects of other types of bias, but rather to describe the problems associated with gathering, generating, processing, training, and evaluating data that might lead to preconceived notions or prejudices and discrimination on the basis of sociodemographic features [2, 3, 4, 5, 6]. Specifically, we are presenting bias in AIMs, also known as algorithmic bias, described as a model or MLAG yielding a systematically wrong outcome because of differential considerations of certain informational aspects, such as gender, age, race, ethnicity, and socioeconomic status (SES), contained in datasets [7]. These learned/training data biases from human input, when heavily and/or blindly relied on in health care, perpetuate human-like biases towards these discriminatory informational attributes [8].…”
Section: Introduction (mentioning)
confidence: 99%