2022
DOI: 10.1007/s13347-022-00505-7
Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship

Abstract: The initial successes in recent years in harnessing machine learning (ML) technologies to improve medical practice and benefit patients have attracted attention in a wide range of healthcare fields. In particular, this is to be achieved by providing automated decision recommendations to the treating clinician. Some of the hopes placed in such ML-based systems for healthcare, however, seem to be unwarranted, at least partly because of their inherent lack of transparency, although their results seem convincing in accu…

Cited by 7 publications (1 citation statement)
References 44 publications
“…In the first place, he or she needs a sufficient informational basis for the validation of AI support statement in individual case situations (cf. the debate on explicability, interpretability and transparency of AI tools for health care) (2,(25)(26)(27)(28).…”
Section: Discussing the Impact of AI Support on the Decision-Making A…
Confidence: 99%