2022
DOI: 10.1007/s43681-022-00177-1

Explainability as fig leaf? An exploration of experts’ ethical expectations towards machine learning in psychiatry

Abstract: The increasing implementation of programs supported by machine learning in medical contexts will affect psychiatry. It is crucial to accompany this development with careful ethical considerations informed by empirical research involving experts from the field, to identify existing problems, and to address them with fine-grained ethical reflection. We conducted semi-structured qualitative interviews with 15 experts from Germany and Switzerland with training in medicine and neuroscience on the assistive use of m…

Cited by 9 publications (5 citation statements)
References: 66 publications
“…A detailed analysis of our main findings concerning the ethical dimension of using ML in psychiatry is provided elsewhere (Starke et al., 2022). In this manuscript, we focus on the impact of ML on psychiatric nosology, allowing for a more in-depth conceptual reflection.…”
Section: Methods (mentioning)
confidence: 99%
“…However, there is no unified definition or acceptability about what and when AI is transparent. Considering that an explainable AI equals ethical AI might be a fig leaf where AI developers cover methodological shortfalls by providing end-users with a false understanding (Starke et al., 2022). In contrast, when these principles aim to provide a basis for technical assurance, they should be described as technically feasible and operationalizable.…”
Section: Lack of Standard Definition of AI (mentioning)
confidence: 99%
“…As an additional factor altering the shared decision-making process between professionals and patients, algorithms undermine clinicians' perceived authority and impact their judgment. In fact, despite the increasing influence of AI recommendations, in instances where AI judgment conflicts with human judgment, the responsibility to authorise the treatment remains with the professional, who must feel empowered to make autonomous decisions (111).…”
Section: Autonomy and Informed Consent (mentioning)
confidence: 99%