Conceptualising fairness: three pillars for medical algorithms and health equity
2022 | DOI: 10.1136/bmjhci-2021-100459

Abstract (Objectives): Fairness is a core concept meant to grapple with different forms of discrimination and bias that emerge with advances in Artificial Intelligence (eg, machine learning, ML). Yet, claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic. Studies either measure (mathematically) competing definitions of fairness, and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To …

Cited by 32 publications (23 citation statements) | References 166 publications (279 reference statements)
“…This was the strongest theme of the literature and conveyed manifold concerns about AI ethics and regulation (Zelmer et al, 2018; Abdullah et al, 2021); ethical design and use of AI technologies in healthcare contexts (Sikstrom et al, 2022); concerns about data privacy, data biases and data collection (Harris, 2021; Ostherr, 2022); as well as concerns about trust, care quality, and liability (Davenport and Kalakota, 2019; Sanal et al, 2019). There is a strong anticipation perspective relating to concerns about role replacement (Johnston, 2018; Blease et al, 2019; Bridge and Bridge, 2019; Powell, 2019; Blease et al, 2020; Doraiswamy et al, 2020; Alrassi et al, 2021) and about which parts of healthcare practice can and should be entrusted to AI technologies (Loftus et al, 2020; Nadin, 2020).…”
Section: Results (mentioning)
confidence: 99%
“…Fairness is not just the result of rigorous and thoughtful research, but rather the social and political processes needed to advance health equity. 92 With machine learning and artificial intelligence models gaining more attention, we should be aware of these issues when designing the models and appropriately mitigate them.…”
Section: Discussion (mentioning)
confidence: 99%
“…Additionally, we introduced three types of bias mitigation methods, namely, pre-processing, in-processing and post-processing, and listed the popular software libraries and tools for bias evaluation and mitigation. Fairness is not just the result of rigorous and thoughtful research, but rather the social and political processes needed to advance health equity. 99 With machine learning and artificial intelligence models gaining more and more attention, we should be aware of these issues when designing the models and appropriately mitigate them.…”
Section: Discussion (mentioning)
confidence: 99%
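The last citation statement above refers to the standard taxonomy of bias mitigation methods (pre-processing, in-processing, post-processing) and to software tools for bias evaluation. As a minimal, hypothetical sketch not drawn from the cited papers, the NumPy snippet below computes a demographic parity gap for a classifier's predictions and the Kamiran-Calders reweighing weights, a classic pre-processing mitigation; the toy data and all variable names are assumptions made for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups."""
    groups = np.unique(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in groups]
    return abs(rates[0] - rates[1])

def reweighing_weights(y_true, sensitive):
    """Kamiran-Calders reweighing (pre-processing): w(a, y) = P(A=a) P(Y=y) / P(A=a, Y=y).
    Up-weights (group, label) combinations that are under-represented relative to
    statistical independence of the sensitive attribute A and the label Y."""
    weights = np.ones(len(y_true), dtype=float)
    for a in np.unique(sensitive):
        for y in np.unique(y_true):
            mask = (sensitive == a) & (y_true == y)
            p_joint = mask.mean()
            if p_joint > 0:
                weights[mask] = (sensitive == a).mean() * (y_true == y).mean() / p_joint
    return weights

# Toy, hypothetical data: a binary sensitive attribute and a classifier whose
# positive-prediction rate differs between the two groups.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=1000)
y_true = rng.binomial(1, np.where(sensitive == 1, 0.6, 0.4))   # label rate depends on group
y_pred = rng.binomial(1, np.where(sensitive == 1, 0.7, 0.3))   # biased predictions

print("Demographic parity difference:", demographic_parity_difference(y_pred, sensitive))
print("Distinct reweighing weights:", np.round(np.unique(reweighing_weights(y_true, sensitive)), 3))
```

A learner that honours sample weights and is trained with these reweighing weights sees a distribution in which the sensitive attribute and the label are statistically independent, which is the intuition behind this family of pre-processing methods; in-processing and post-processing approaches instead adjust the training objective or the decision thresholds, respectively.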