2022
DOI: 10.1007/s11019-022-10076-1
The Deception of Certainty: How Non-Interpretable Machine Learning Outcomes Challenge the Epistemic Authority of Physicians. A Deliberative-Relational Approach

Abstract: Developments in Machine Learning (ML) have attracted attention across a wide range of healthcare fields, with the aim of improving medical practice and benefiting patients. In particular, this is to be achieved by providing more or less automated decision recommendations to the treating physician. However, some hopes placed in ML for healthcare appear to be disappointed, at least in part, by a lack of transparency or traceability. Skepticism stems primarily from the fact that the physician, as the person responsible for diagnosi…

Cited by 19 publications (6 citation statements)
References 44 publications
“…In the face of AI support, this is changing in that the form of data evaluation is taking place in a new way, one that is difficult for humans to comprehend, and it is moving alongside the methods already used to achieve action-guiding knowledge (e.g., guidelines from medical professional societies, medical-theoretical expertise). Particularly in the case of divergent or disagreeing action-guiding knowledge, decision-making situations can arise that are difficult to resolve from a human point of view without providing underlying reasoning ( 14 , 30 , 31 ), one of the central empirically identified barriers to the use of AI support [cf. ( 8 , 9 , 32 )].…”
Section: Discussing the Impact of AI Support on the Decision-Making A… (mentioning)
confidence: 99%
“…This necessary information varies in different fields of application, for example due to the consequences of a clinical decision on the patient’s life [cf. for this elsewhere ( 14 )]. In such a way, a recommendation can be made for the individual patient that best corresponds to his or her well-being and avoids the inherent shortcomings of AI support tools.…”
Section: Discussing the Impact of AI Support on the Decision-Making A… (mentioning)
confidence: 99%
“…In contrast to conventional expert systems, in which algorithms process the data supplied to them in a rule-based way, i.e., by means of deterministic if-then rules, always in the same and thus complex but predictable manner, non-rule-based algorithms can develop (further) ‘independently’ solely on the basis of the data supplied to them 10 . What is potentially problematic about such applications is that their decision structure cannot be understood and evaluated by humans, or only partially and at great effort 11 12 .…”
Section: Introduction (unclassified)