2022
DOI: 10.3390/a15100353
Using Explainable AI (XAI) for the Prediction of Falls in the Older Population

Abstract: The prevention of falls in older people requires the identification of the most important risk factors. Frailty is associated with risk of falls, but not all falls are of the same nature. In this work, we utilised data from The Irish Longitudinal Study on Ageing to implement Random Forests and Explainable Artificial Intelligence (XAI) techniques for the prediction of different types of falls and analysed their contributory factors using 46 input features that included those of a previously investigated frailty…
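The pipeline the abstract describes, a tree ensemble whose predictions are decomposed into per-feature contributions by an XAI method, can be sketched as follows. This is a minimal illustration under stated assumptions: the synthetic data stands in for the 46 TILDA-derived features, and SHAP's TreeExplainer is one common XAI choice, not necessarily the method used in the paper.

    # Illustrative sketch only: a Random Forest on a synthetic stand-in for the
    # 46 TILDA features, explained with SHAP. Data, labels, and the choice of
    # SHAP are assumptions; the paper's exact pipeline is not given here.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 46))  # stand-in feature matrix (46 features)
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)  # stand-in fall labels

    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X, y)

    # TreeExplainer attributes each prediction to individual input features,
    # which is how contributory factors for each fall type can be ranked.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)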

Cited by 3 publications (4 citation statements)
References: 45 publications

“…In addition, due to low numbers we did not study more granular transitions in falls subtypes (e.g. injurious, with recalled loss of consciousness) [27]. Our findings may not be generalisable and should be validated in clinical populations and external datasets.…”
Section: Discussion

“…The inclusion of TUG, MoCA and OH is due to previous work that described a ‘Bermuda triangle’ of falls risk in older people, characterised by gait disorder, cognitive impairment, and postural hypotension [25, 26]. Other covariates were chosen based on previous research and an emphasis on including potentially modifiable factors [27–30]. All covariates were the values from Wave 1 assessment and remained constant across waves.…”
Section: Methods

“…Indeed, explaining the predictions made by AI algorithms is mandatory for healthcare professionals and patients in order to check medical hypotheses and conduct assessments based on AI-powered clinical decision support systems [12]. This is particularly important in fall event prediction, where healthcare professionals need to understand the factors that contribute to an individual's risk of falling in order to develop effective prevention strategies [13].…”
Section: Introduction

“…Yet, deep learning models show clear limitations when it comes to interpretability due to the complex calculus that they introduce and the depth of their layers [37]. As a result, explainable AI was introduced to address this issue [13]. Attention mechanism models, also known as self-attention (SA) models, utilize an architecture that creates similarity links inside an input sequence in order to assign importance weights to its components [38].…”
Section: Introduction
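The excerpt above describes self-attention as building similarity links within an input sequence and converting them into importance weights over its components. A minimal NumPy sketch of the standard scaled dot-product formulation makes that concrete; the projection matrices and toy data here are illustrative assumptions, not drawn from the cited work.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention over one sequence X of shape (n, d)."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv         # project tokens to queries, keys, values
        scores = Q @ K.T / np.sqrt(Q.shape[-1])  # similarity links between all positions
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)       # row-wise softmax -> importance weights
        return w @ V, w                          # weighted mix of values, and the weights

    # Toy usage with random projections (illustrative only)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out, weights = self_attention(X, Wq, Wk, Wv)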