2019
DOI: 10.1136/bmj.l886

Clinical applications of machine learning algorithms: beyond the black box

Abstract: To maximise the clinical benefits of machine learning algorithms, we need to rethink our approach to explanation, argue David Watson and colleagues

Cited by 276 publications (178 citation statements)
References 21 publications
“…This means that the ability for individuals to be meaningfully involved in shared decision making is considerably undermined. As a result, the increasing use of algorithmic decision-making in clinical settings can have negative implications for individual autonomy: for an individual to exert agency over an AI-Health derived clinical decision, they would need a good understanding of the underlying data, processes and technical possibilities involved in reaching it (DuFault & Schouten, 2018) and be able to ensure their own values are taken into consideration (McDougall, 2019). The vast majority of the population do not have the level of eHealth literacy necessary for this (Kim & Xie, 2017), and those that do (including HCPs) are prevented from gaining this understanding by the black-box nature of AI-Health algorithms (Watson et al, 2019). In extreme instances, this could undermine an individual's confidence in their ability to refuse treatment (Ploug & Holm, 2019).…”
Section: Normative Concerns: Unfair Outcomes and Transformative Effects (mentioning)
confidence: 99%
“…inclusion of all relevant stakeholder views in the development of AI-Health systems (Aitken et al, 2019): Epistemic (C, E, F), Overarching (A, C, D, F); explainability of specific AI-Health decisions (Watson et al, 2019)…”
mentioning
confidence: 99%
“…The "black box" problem is one of the major foci of AI ethics (37). Besides referring to the inherent opacity of complex machine learning algorithms such as neural networks, it is also the case that the increasing size of datasets used in developing AI for health makes explanations of the relationships between input data and outputs difficult-understanding how each of millions of variables contributes to the final output may be computationally intractable (38). Questions that may therefore follow include: How can patients give meaningful informed consent to, or clinicians advise the use of, algorithms the internal workings of which are unclear?…”
Section: Transparency and Explainability (mentioning)
confidence: 99%
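
As one concrete illustration of why explanation at this scale is hard, the sketch below (not taken from the cited paper; the model, data, and parameters are illustrative stand-ins) probes a black-box classifier with permutation importance, a standard post-hoc technique: shuffle one input variable at a time on held-out data and measure how much predictive performance drops. Even this simple per-variable probe scales linearly in the number of features, which hints at why attribution over millions of variables becomes intractable.

```python
# Minimal sketch of a post-hoc probe for a black-box model: permutation
# importance. All names and data here are illustrative, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical tabular data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ensemble: accurate, but its internal logic resists inspection.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy;
# a large drop marks a variable the model's predictions depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

A probe like this yields variable-level rankings without opening the model, but it captures correlational dependence rather than mechanism, which is part of why the citing authors treat explainability as an unresolved ethical problem.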
“…Finally, the suffering of technical cognizers need not be conditional on human cognizers as the cause of suffering, but machine-machine interactions that are black-boxed to users and programmers may, on this account of suffering, permit the emergence of new grounds on which these relations and consequences can take place. Issues of transparency, understandability and technical verifiability serve as good grounds for how machines interact with each other, form hierarchies of power and affect each other on a cognitive nonconscious level [129][130][131][132]. The distribution of systems, their interplay and interdependence make this all the more pressing.…”
Section: Limitations and Further Research Streams (mentioning)
confidence: 99%