2023
DOI: 10.1136/jme-2022-108850
Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives

Abstract: Background: There is a growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). Objectives: Our objectives are to canvas the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. Methodology: The study involves in-depth, semistructured interviews with healthcare workers, screening programme…

Cited by 16 publications (15 citation statements)
References 44 publications
“…They also emphasised rigorous evaluation and fairness, aspects that may be neglected by commercial producers of health care AI. Reported breakthroughs in health care machine learning have often not been supported by more methodologically rigorous scrutiny, 19 and evaluations of health care AI have often focused on overall accuracy rather than bias or fairness. 13 The jury's recommendations suggest that a well informed public might reject these approaches as unjustifiable.…”
Section: Discussion (mentioning)
confidence: 99%
“…The four speakers directly answered final questions at the end of the first face‐to‐face day. On the second day, a world café‐style session 12 helped jurors discuss and record their insights about the benefits, harms, and bias and fairness of AI in health care. 13 Jurors then developed a list of questions that might require recommendations, which the research team sorted into draft categories; the entire jury finalised the category list together.…”
Section: Methods (mentioning)
confidence: 99%
“…Technological advancements can sometimes exacerbate existing inequalities. LGBTQ individuals with limited access to technology or digital literacy might be left behind in terms of benefiting from positive AI impacts [ 51 - 53 ].…”
Section: Discussion (mentioning)
confidence: 99%
“…The use of big chemical data and data-driven approaches, however, comes with risks. The problems of biased training data producing biased ML algorithms and, more generally, analysis of biased data producing biased results are well documented across the natural and social sciences. Misleading results will not only confuse and distract scientific progress but also undermine public confidence in science itself. To manage difficulties associated with big data, data scientists have developed several metrics for assessing the reliability and accessibility of results from big data use. The “five Vs” (velocity, volume, value, variety, and veracity) serve as use principles for generating reliable results from big data (Figure ).…”
Section: Chemistry’s Big Data Era (mentioning)
confidence: 99%