Background
Smartphones and their built-in sensors allow for measuring functions in disease-related domains through mobile tests. This could improve disease characterization and monitoring, and could potentially support treatment decisions for multiple sclerosis (MS), a multifaceted chronic neurological disease with highly variable clinical manifestations. Practice effects can complicate interpretation: they may exaggerate improvement over time, and thus apparent treatment effects, and they may mask deterioration behind seemingly stable performance.

Objective
The aim of this study is to identify short-term learning and long-term practice effects in 6 active tests for cognition, dexterity, and mobility in user-scheduled, high-frequency smartphone-based testing.

Methods
We analyzed data from 264 people with self-declared MS with a minimum of 5 weeks of follow-up and at least 5 repetitions per test in the Floodlight Open study, a self-enrollment study accessible to smartphone owners from 16 countries; the collected data are openly available to scientists. Using regression and bounded growth mixed models, we characterized practice effects for the following tests: electronic Symbol Digit Modalities Test (e-SDMT) for cognition; Finger Pinching and Draw a Shape for dexterity; and Two Minute Walk, U-Turn, and Static Balance for mobility.

Results
Strong practice effects were found for e-SDMT (n=4824 trials), Finger Pinching (n=19,650), and Draw a Shape (n=19,019), with modeled boundary improvements of 40.8% (39.9%-41.6%), 86.2% (83.6%-88.7%), and 23.1% (20.9%-25.2%) over baseline, respectively. Half of the practice effect was reached after 11 repetitions for e-SDMT, 28 for Finger Pinching, and 17 for Draw a Shape; 90% was reached after 35, 94, and 56 repetitions, respectively. Although baseline performance levels were highly variable across participants, no significant differences in short-term learning effects were found between low performers (5th and 25th percentiles), median performers, and high performers (75th and 95th percentiles) for e-SDMT up to the fifth trial (β=1.50-2.00); only small differences were observed for Finger Pinching (β=1.25-2.5). For U-Turn (n=15,051) and Static Balance (n=16,797), only short-term learning effects were observed, which ceased after a maximum of 5 trials. For Two Minute Walk (n=14,393), neither short-term learning nor long-term practice effects were observed.

Conclusions
Smartphone-based tests are promising for monitoring the disease trajectories of MS and other chronic neurological diseases. Our findings suggest that strong long-term practice effects on cognitive and dexterity functions must be accounted for to identify disease-related changes in these domains, especially in the context of personalized health and in studies without a comparator arm. In contrast, changes in mobility may be easier to interpret because of the absence of long-term practice effects, even though short-term learning effects might have to be considered.
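The saturating practice curves described above can be illustrated with a minimal bounded growth sketch. This is not the study's fitted mixed model: it assumes a single-exponential saturation form and picks the rate so that half of the boundary improvement is reached at 11 repetitions, as reported for the e-SDMT; all parameter values and function names are illustrative.

```python
import math

def bounded_growth(t, baseline, boundary_gain, k):
    """Practice curve that saturates at baseline * (1 + boundary_gain)."""
    return baseline * (1 + boundary_gain * (1 - math.exp(-k * t)))

def reps_to_fraction(k, frac):
    """Repetitions needed to realize a given fraction of the boundary improvement."""
    return -math.log(1 - frac) / k

# Rate chosen so 50% of the effect is reached at 11 repetitions (e-SDMT figure).
k = math.log(2) / 11

print(reps_to_fraction(k, 0.5))  # ~11 repetitions, by construction
print(reps_to_fraction(k, 0.9))  # ~36.5 repetitions: close to the reported 35,
                                 # so the fitted curve is roughly of this shape
```

Under this form, the ratio between the 90% and 50% repetition counts is fixed at ln(10)/ln(2) ≈ 3.3; the reported e-SDMT ratio (35/11 ≈ 3.2) is close, while any deviation would indicate a different curve shape in the actual model.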
Background
There is an unmet need for reliable and sensitive measures for better monitoring of people with multiple sclerosis (PwMS), to detect disease progression early and adapt therapeutic measures accordingly.

Objective
To assess the reliability of extracted features and the meaningfulness of 11 tests applied through a smartphone application ("dreaMS").

Methods
PwMS (age 18–70 years, EDSS ≤ 6.5) and matched healthy volunteers (HV) were asked to perform tests installed on their smartphones once or twice weekly for 5 weeks. Primary outcomes were test–retest reliability of test features (target: intraclass correlation coefficient [ICC] ≥ 0.6 or median coefficient of variation [mCV] < 0.2) and meaningfulness of the tests as reported by PwMS. Meaningfulness was self-assessed for each test on a 5-point Likert scale (target: mean score > 3) and by a structured interview. ClinicalTrials.gov identifier: NCT04413032.

Results
We included 31 PwMS (21 [68%] female; mean age 43.4 ± 12.0 years; median EDSS 3.0 [range 1.0–6.0]) and 31 age- and sex-matched HV. Of 133 features extracted from the 11 tests, 89 met the preset reliability criteria. All 11 tests were perceived as highly meaningful by PwMS.

Conclusion
The dreaMS app reliably assessed features reflecting key functional domains meaningful to PwMS. More studies with longer follow-up are needed to prove the validity of these measures as digital biomarkers in PwMS.
Background
Cognitive impairment occurs in up to 70% of people with MS (pwMS) and has a large impact on quality of life and working capacity. As part of the development of a smartphone app (dreaMS) for monitoring MS disease activity and progression, we assessed the feasibility and acceptance of using cognitive games as assessment tools for cognitive domains.

Methods
We integrated ten cognitive games into the dreaMS app. Participants were asked to play these games twice a week for 5 weeks. All subjects underwent a battery of established neuropsychological tests. User feedback on acceptance was obtained via a five-point Likert-scale questionnaire. We correlated game performance measures with predetermined reference tests (Spearman's rho) and analyzed differences between pwMS and healthy controls (HC; rank biserial correlation).

Results
We included 31 pwMS (mean age 43.4 ± 12.0 years; 68% female; median Expanded Disability Status Scale score 3.0, range 1.0–6.0) and 31 age- and sex-matched HC. All but one game showed moderate–strong correlations with their reference tests (|rs| = 0.34–0.77). Performance improved in both groups over the 5 weeks. Average ratings for overall impression and meaningfulness were 4.6 (range 4.2–4.9) and 4.7 (range 4.5–4.8), respectively.

Conclusion
Moderate–strong correlations with reference tests suggest that adaptive cognitive games may be used as measures of cognitive domains. The practice effects observed suggest that game-derived measures may capture change over time. All games were perceived as enjoyable and meaningful, features crucial for long-term adherence. Our results encourage further validation of adaptive cognitive games as monitoring tools for cognition in larger studies of longer duration.

Study Register
ClinicalTrials.gov: NCT04413032.
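The two statistics named in the Methods can be sketched by hand. These are textbook implementations, not the study's analysis code: the Spearman version ignores tie correction, the rank-biserial is the Mann-Whitney U effect size, and all input data are hypothetical.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks (no tie correction)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def rank_biserial(a, b):
    """Rank-biserial correlation between two groups, via the Mann-Whitney U."""
    u = sum((xa > xb) + 0.5 * (xa == xb) for xa in a for xb in b)
    return 2 * u / (len(a) * len(b)) - 1

# Hypothetical game scores vs. reference-test scores for five participants.
game = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ref = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
print(spearman_rho(game, ref))           # ~0.8: a "strong" monotonic association
print(rank_biserial([3, 4, 5], [1, 2, 3]))  # ~0.89: large group difference
```

Using rank-based measures here is a deliberate choice: game scores and clinical scales are rarely normally distributed, and ranks make the correlation robust to monotone rescaling of either measure.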