Unconstrained human movement can be broken down into a series of stereotyped motifs or 'syllables' in an unsupervised fashion. Sequences of these syllables can be represented by symbols and characterized by a statistical grammar, which varies with external situational context and internal neurological state. By first constructing a Markov chain from the transitions between these syllables and then calculating the stationary distribution of this chain, we estimate the overall severity of Parkinson's symptoms by capturing the increasingly disorganized transitions between syllables as motor impairment increases. Comparing stationary distributions of movement syllables has several advantages over traditional neurologist-administered in-clinic assessments: the technique can be applied to unconstrained at-home behavior as well as scripted in-clinic exercises, it avoids differences across human evaluators, and it can be used continuously without requiring that scripted tasks be performed. We demonstrate the effectiveness of this technique using movement data captured with commercially available wrist-worn sensors in 35 participants with Parkinson's disease in-clinic and 25 participants monitored at home.
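The core computation described above, building a Markov chain over syllable transitions and taking its stationary distribution, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the add-one smoothing, the power-iteration tolerance, the toy syllable sequence, and the total-variation comparison are all assumptions introduced here.

```python
def stationary_distribution(seq, n):
    """Estimate the stationary distribution of a Markov chain built
    from an observed sequence of syllable labels in range(n)."""
    # Transition counts with add-one smoothing so every state stays
    # reachable and the stationary distribution is unique (assumption).
    counts = [[1.0] * n for _ in range(n)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1.0
    # Row-normalize into a transition probability matrix P.
    P = [[c / sum(row) for c in row] for row in counts]
    # Power iteration: repeatedly apply pi <- pi P until convergence.
    pi = [1.0 / n] * n
    for _ in range(10000):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(x - y) for x, y in zip(nxt, pi)) < 1e-12:
            pi = nxt
            break
        pi = nxt
    return pi

def total_variation(p, q):
    # One illustrative way to compare two stationary distributions.
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Toy syllable sequence over 3 syllables
pi = stationary_distribution([0, 1, 2, 1, 0, 2, 2, 1, 0, 1], 3)
```

Severity comparisons would then reduce to distances (e.g., total variation) between a participant's distribution and a reference distribution.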
The dynamics of the human fingertip enable haptic sensing and the ability to manipulate objects in the environment. Here we describe a wearable strain sensor, associated electronics, and software to detect and interpret the kinematics of deformation in human fingernails. Differential forces exerted by fingertip pulp, rugged connections to the musculoskeletal system and physical contact with the free edge of the nail plate itself cause fingernail deformation. We quantify nail warpage on the order of microns in the longitudinal and lateral axes with a set of strain gauges attached to the nail. The wearable device transmits raw deformation data to an off-finger device for interpretation. Simple motions, gestures, finger-writing, grip strength, and activation time, as well as more complex idioms consisting of multiple grips, are identified and quantified. We demonstrate the use of this technology as a human-computer interface, clinical feature generator, and means to characterize workplace tasks.
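Strain gauges like those described above report deformation as a fractional resistance change via the standard gauge relation ΔR/R = GF · ε, where GF is the gauge factor (roughly 2 for typical metal-foil gauges). A minimal sketch of that conversion; the nominal resistance, resistance shift, and gauge factor below are illustrative values, not the device's calibration:

```python
def strain_from_resistance(delta_r, r_nominal, gauge_factor=2.0):
    """Convert a measured resistance change to strain using the
    standard strain-gauge relation dR/R = GF * strain."""
    return (delta_r / r_nominal) / gauge_factor

# Illustrative: a 120-ohm gauge shifting by 2.4 milliohms
eps = strain_from_resistance(0.0024, 120.0)  # 1e-5, i.e. 10 microstrain
```

Resolving micron-scale warpage thus comes down to reliably measuring milliohm-scale resistance changes, which is why the associated bridge electronics matter as much as the gauges themselves.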
Background Facial expressions require the complex coordination of 43 different facial muscles. Parkinson disease (PD) affects facial musculature, leading to “hypomimia” or “masked facies.” Objective We aimed to determine whether modern computer vision techniques can be applied to detect masked facies and quantify drug states in PD. Methods We trained a convolutional neural network on images extracted from videos of 107 self-identified people with PD, along with 1595 videos of controls, in order to detect PD hypomimia cues. This trained model was applied to clinical interviews of 35 PD patients in their on and off drug motor states, and seven journalist interviews of the actor Alan Alda obtained before and after he was diagnosed with PD. Results The algorithm achieved a test set area under the receiver operating characteristic curve of 0.71 on 54 subjects to detect PD hypomimia, compared to a value of 0.75 for trained neurologists using the Unified Parkinson's Disease Rating Scale-III Facial Expression score. Additionally, the model accuracy to classify the on and off drug states in the clinical samples was 63% (22/35), in contrast to an accuracy of 46% (16/35) when using clinical rater scores. Finally, each of Alan Alda’s seven interviews was successfully classified as occurring before (versus after) his diagnosis, with 100% accuracy (7/7). Conclusions This proof-of-principle pilot study demonstrated that computer vision holds promise as a valuable tool for detecting PD hypomimia and for monitoring a patient’s motor state in an objective and noninvasive way, particularly given the increasing importance of telemedicine.
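The AUROC metric reported here (and in the following study) has a direct rank interpretation: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case, with ties counting half. A minimal pure-Python sketch with made-up labels and scores, not the study's data:

```python
def auroc(labels, scores):
    """Area under the ROC curve, computed as the win rate over all
    positive-vs-negative score pairs (ties count as half a win)."""
    positives = [s for l, s in zip(labels, scores) if l == 1]
    negatives = [s for l, s in zip(labels, scores) if l == 0]
    wins, pairs = 0.0, 0
    for p in positives:
        for n in negatives:
            pairs += 1
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / pairs

# Toy example: two controls (label 0), two PD cases (label 1)
score = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

An AUROC of 0.71 therefore means a randomly selected PD face clip outranks a randomly selected control clip about 71% of the time, which is why 0.5 (not 0) is the chance baseline.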
Background In contrast to all other areas of medicine, psychiatry is still nearly entirely reliant on subjective assessments such as patient self-report and clinical observation. The lack of objective information on which to base clinical decisions can contribute to reduced quality of care. Behavioral health clinicians need objective and reliable patient data to support effective targeted interventions. Objective We aimed to investigate whether reliable inferences—psychiatric signs, symptoms, and diagnoses—can be extracted from audiovisual patterns in recorded evaluation interviews of participants with schizophrenia spectrum disorders and bipolar disorder. Methods We obtained audiovisual data from 89 participants (mean age 25.3 years; male: 48/89, 53.9%; female: 41/89, 46.1%): individuals with schizophrenia spectrum disorders (n=41), individuals with bipolar disorder (n=21), and healthy volunteers (n=27). We developed machine learning models based on acoustic and facial movement features extracted from participant interviews to predict diagnoses and detect clinician-coded neuropsychiatric symptoms, and we assessed model performance using area under the receiver operating characteristic curve (AUROC) in 5-fold cross-validation. Results The model successfully differentiated between schizophrenia spectrum disorders and bipolar disorder (AUROC 0.73) when aggregating face and voice features. Facial action units including cheek-raising muscle (AUROC 0.64) and chin-raising muscle (AUROC 0.74) provided the strongest signal for men. Vocal features, such as energy in the frequency band 1 to 4 kHz (AUROC 0.80) and spectral harmonicity (AUROC 0.78), provided the strongest signal for women. Lip corner–pulling muscle signal discriminated between diagnoses for both men (AUROC 0.61) and women (AUROC 0.62). 
Several psychiatric signs and symptoms were successfully inferred: blunted affect (AUROC 0.81), avolition (AUROC 0.72), lack of vocal inflection (AUROC 0.71), asociality (AUROC 0.63), and worthlessness (AUROC 0.61). Conclusions This study represents an advance in efforts to capitalize on digital data to improve diagnostic assessment and supports the development of a new generation of innovative clinical tools by employing acoustic and facial data analysis.
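The 5-fold cross-validation used in this study partitions participants into five held-out test folds so that every participant is scored by a model that never saw them during training. A sketch of the index bookkeeping, assuming a simple contiguous split; the paper does not state its fold-assignment scheme, and in practice folds are typically shuffled or stratified by diagnosis:

```python
def kfold_splits(n, k=5):
    """Yield (train_indices, test_indices) pairs for k-fold
    cross-validation over n samples; fold sizes differ by at most one."""
    base, extra = divmod(n, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        test = list(range(start, start + size))
        train = [j for j in range(n) if j < start or j >= start + size]
        yield train, test
        start += size

# 89 participants, as in the study above
splits = list(kfold_splits(89, 5))
```

Reporting the AUROC aggregated across these five test folds gives each participant exactly one out-of-sample prediction.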