Parkinson’s disease (PD) is a common neurological disorder, and bradykinesia is one of its cardinal features. Objective quantification of bradykinesia using computer vision has the potential to standardise decision-making in patient treatment and clinical trials, while facilitating remote assessment. We used a dataset of MDS-UPDRS Part 3 motor assessments, collected at four independent clinical sites and one research site on two continents, to build computer-vision-based models that infer the correct severity rating robustly and consistently across all identifiable patient subgroups. These results contrast with previous work limited by small sample sizes and few sites. Our bradykinesia estimates corresponded well with clinician ratings (intraclass correlation 0.74), and this agreement was consistent across the four clinical sites. This result demonstrates how such technology can be deployed within existing clinical workflows using consumer-grade smartphone or tablet devices, adding minimal equipment cost and time.
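The abstract reports agreement with clinician ratings as an intraclass correlation of 0.74. The exact ICC variant used is not stated, so as an illustration only, the following sketch computes the common two-way random-effects, absolute-agreement form, ICC(2,1), from a subjects-by-raters matrix; the function name and the ANOVA-based formulation are assumptions, not the authors' code.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rating.

    ratings: array of shape (n_subjects, k_raters).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Standard two-way ANOVA decomposition of the total sum of squares.
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1).
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

With a model's severity estimates in one column and clinician ratings in the other, perfect agreement yields an ICC of 1.0, while systematic rater offsets or random disagreement pull the value down.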
Over the last decade, video-enabled mobile devices have become ubiquitous, while advances in markerless pose estimation allow an individual's body position to be tracked accurately and efficiently across the frames of a video. Previous work by this and other groups has shown that pose-extracted kinematic features can reliably measure motor impairment in Parkinson's disease (PD). This raises the prospect of an asynchronous, scalable, video-based assessment of motor dysfunction. Crucial to this endeavour is the ability to automatically recognise the class of action being performed, without which manual labelling is required. Representing the evolution of body joint locations as a spatio-temporal graph, we implement a deep-learning model for video- and frame-level classification of activities performed according to Part 3 of the Movement Disorder Society Unified PD Rating Scale (MDS-UPDRS). We train and validate this system on a dataset of n=7310 video clips recorded at five independent sites. This approach reaches human-level performance in detecting and classifying periods of activity within monocular video clips. Our framework could support clinical workflows and patient care at scale through applications such as quality monitoring of clinical data collection, automated labelling of video streams, or as a module within a remote self-assessment system.
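The spatio-temporal graph representation mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a hypothetical five-joint skeleton, builds the normalised spatial adjacency used by ST-GCN-style models, and reshapes a sequence of 2-D keypoints into the (channels, time, joints) node-feature tensor such models consume; temporal edges are handled implicitly by 1-D convolution over the time axis.

```python
import numpy as np

# Hypothetical 5-joint skeleton: a chain with one branch (for illustration only).
EDGES = [(0, 1), (1, 2), (1, 3), (3, 4)]
N_JOINTS = 5

def spatial_adjacency(n_joints, edges):
    # Symmetric adjacency matrix with self-loops on every joint.
    a = np.eye(n_joints)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    return a

def normalise(a):
    # Symmetric normalisation D^{-1/2} A D^{-1/2}, standard for graph convolutions.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

def to_graph_tensor(keypoints):
    # keypoints: (T frames, V joints, 2 coords) -> (C=2, T, V) feature tensor.
    # Spatial graph convolutions mix the V axis via the adjacency; temporal
    # convolutions slide a 1-D kernel along T, giving the "temporal edges".
    return np.transpose(keypoints, (2, 0, 1))
```

A per-frame classification head applied to the (C, T, V) tensor would then yield the frame-level activity labels described in the abstract, with clip-level labels obtained by pooling over time.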