2021
DOI: 10.1145/3474121
Explaining Machine Learning Models for Clinical Gait Analysis

Abstract: Machine Learning (ML) is increasingly used to support decision-making in the healthcare sector. While ML approaches provide promising results with regard to their classification performance, most share a central limitation: their black-box character. This article investigates the usefulness of Explainable Artificial Intelligence (XAI) methods to increase transparency in automated clinical gait classification based on time series. For this purpose, predictions of …

Cited by 37 publications (11 citation statements) · References 60 publications
“…2. One of the significant drawbacks of XAI-based AD diagnosis is the absence of ground truth data [164]. Several neuroimaging and clinical biomarker datasets exist for AD, but none provide ground truth to validate the explainability elicited by XAI models.…”
Section: XAI Researchers Often Resort To Self-Intuition To De… (mentioning)
confidence: 99%
“…Results demonstrate that the proposed FCN model is accurate in locating relevant time points in both case studies, and it is also more consistent, as it indicates mostly the same time steps as relevant for its predictions when trained with different random initialisations. Slijepcevic et al. (2021) proposed a complementary strategy to evaluate class-specific explanations for gait classification from 3-D ground reaction force (GRF) sensor data, including both quantitative and qualitative analysis. To this aim, they first trained CNN, SVM, and MLP classifiers on the GaitRec dataset (Horsak et al. 2020), a clinical database including bilateral GRF measurements from 132 patients with 3 classes of orthopedic gait disorders and from 62 healthy controls; the LRP technique was then used to explain the most relevant signal characteristics learned by the models.…”
Section: Explanation Quality Assessment (mentioning)
confidence: 99%
“…Alvarez Melis and Jaakkola [32] propose faithfulness as an important metric for evaluating explainable machine learning; it is measured by removing/perturbing a feature and then measuring the drop in classification performance. Explainable machine learning has been used to analyze gait patterns for clinical analysis [33]. Horst et al. [34] used LRP to study which part of the gait cycle is relevant to a non-linear machine learning model to recognize an individual.…”
Section: Related Work (mentioning)
confidence: 99%
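The faithfulness measurement described above (perturb a feature, measure the performance drop) can be sketched in a few lines. This is a generic illustration, not the protocol from [32]: the synthetic dataset, the logistic-regression model, and mean-imputation as the perturbation are all assumptions; any fitted classifier and held-out set would do.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy setup: synthetic data and a simple classifier, purely illustrative.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
baseline = accuracy_score(y, clf.predict(X))

def faithfulness_drop(feature_idx):
    """Perturb one feature (replace it with its mean, removing its
    information) and return the resulting drop in accuracy."""
    Xp = X.copy()
    Xp[:, feature_idx] = Xp[:, feature_idx].mean()
    return baseline - accuracy_score(y, clf.predict(Xp))

drops = np.array([faithfulness_drop(i) for i in range(X.shape[1])])
# The larger the drop, the more the model actually relies on the feature;
# a faithful explanation should rank features in roughly this order.
ranking = np.argsort(drops)[::-1]
```

Comparing this perturbation-based ranking against the ranking implied by an explanation method (e.g. LRP relevance scores) gives a quantitative faithfulness check.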