Objective: As computerized cognitive testing becomes increasingly popular in clinical and research settings, studies of efficacy and psychometric properties are essential. One such program is RC21X, a web-based brain performance measurement tool. Based on empirically supported neurocognitive and neuromotor tasks, the 12-minute test consists of 15 modules measuring memory, motor coordination, processing speed, and executive functioning. Because individuals may use RC21X repeatedly to track changes in cognitive performance, establishing the reliability of the program is imperative. The current study examined the test–retest reliability of RC21X over a 2-week period.
Method: The sample consisted of 222 individuals: 192 (86.5%) were male and 30 (13.5%) were female. Average age was 44.06 years (SD = 17.76), with ages ranging from 7 to 82 years. We computed Pearson's correlation coefficients for module and composite scores to determine reliability between performance at Times 1 and 2.
Results: All correlations were statistically significant (p < .001). The 2-week test–retest reliability for the composite score was 0.72, with subtest coefficients ranging from 0.54 on an auditory memory recognition task to 0.89 on a finger tapping task. We replicated these analyses with the participants (n = 43) who completed test sessions 3 and 4 and found results similar to those from the Test 1 and Test 2 analyses, suggesting that results are stable over multiple administrations.
Conclusions: Results for RC21X were comparable to the existing literature, which supports moderate to high reliability for other computer-based tests. Although future research needs to investigate the validity of RC21X, our findings support potential applications in research, clinical use, and personal brain performance measurement.
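The reliability analysis described in this abstract amounts to correlating each measure across the two sessions. As a rough sketch only (not the study's actual code), the computation could look like the following in Python with SciPy; the file name, column layout, and measure names are assumptions.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical layout: one row per participant, with Time 1 ("_t1") and
# Time 2 ("_t2") columns for each measure. File and column names are placeholders.
scores = pd.read_csv("rc21x_sessions.csv")

def test_retest_r(df, measure):
    """Pearson's r between Time 1 and Time 2 scores for one measure."""
    r, p = pearsonr(df[f"{measure}_t1"], df[f"{measure}_t2"])
    return r, p

# Placeholder measure names; the study reports 15 modules plus a composite.
for measure in ["composite", "finger_tapping", "auditory_memory"]:
    r, p = test_retest_r(scores, measure)
    print(f"{measure}: r = {r:.2f}, p = {p:.3g}")
```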
BACKGROUND: As technology advances to create new devices capable of neuropsychological measurement, new possibilities emerge. Smartphones, which many people already own, are one such device. Smartphone-based measurements may increase consumer, clinician, and researcher access to neuropsychological data. To explore smartphones for this purpose, it is important to determine their psychometric rigor.
OBJECTIVE: Because of their convenience and accessibility, smartphone measurements may complement, and even surpass, computerized neuropsychological and paper-and-pencil measurements, but they lack psychometric evaluation. We analyzed the test–retest reliability of a neuropsychological self-monitoring app named Roberto.
METHODS: Participants downloaded and self-administered Roberto, unsupervised and without an assigned location. Roberto scoring used a multiplicative approach derived from general systems performance theory (GSPT) and the elemental resource model (ERM) for human performance. Various modules measured neuromotor and cognitive performance during a six-minute Roberto administration session. We analyzed data from users who completed their first two sessions within 14 days (n = 69).
RESULTS: Analyses included Pearson's product-moment correlations on module scores to assess reliability; a paired-samples t-test (t = -3.087, p = 0.003) on composite scores from sessions 1 and 2 to assess practice effects; and a repeated-measures MANOVA, with session number as the independent variable and module scores as dependent variables, to determine whether module scores differed between sessions 1 and 2, followed by univariate ANOVAs to determine which modules differed. Reliability coefficients ranged from 0.296 to 0.725 (all p values < .05); most coefficients were above 0.53. We found practice effects: scores on session 2 were 46% higher than on session 1, the MANOVA revealed significant differences between sessions, and univariate ANOVAs showed that three modules had significantly different scores on session 2 than on session 1.
CONCLUSIONS: Results demonstrate the feasibility of a smartphone app used in ecologically robust ways, even under suboptimal conditions. Future research needs to evaluate reliability across test environments. Additional studies need to investigate differences across smartphone types and specifications, such as screen size and operating system. Furthermore, our study found practice effects over multiple Roberto administrations, which introduces the need to investigate the dynamics of these effects on smartphones. Critically, future work needs to investigate the validity of neuropsychological data obtained from short measurement sessions compared with full neuropsychological examinations.
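The practice-effect checks in this abstract can be sketched along similar lines. The snippet below is a hedged illustration, not the authors' code: the file, column, and module names are assumptions, and the repeated-measures MANOVA step is omitted. Note that with only two sessions, a univariate repeated-measures follow-up on a single module is equivalent to a paired t-test (F = t²), so paired t-tests stand in for the univariate ANOVAs here.

```python
import pandas as pd
from scipy.stats import ttest_rel

# Hypothetical wide layout: one row per user, with session 1 ("_s1") and
# session 2 ("_s2") columns for the composite and each module.
# File, column, and module names below are placeholders.
data = pd.read_csv("roberto_first_two_sessions.csv")
modules = ["reaction_time", "tapping", "visual_memory"]  # placeholder names

# Practice effects on the composite score (paired-samples t-test).
t, p = ttest_rel(data["composite_s2"], data["composite_s1"])
print(f"composite: t = {t:.3f}, p = {p:.3f}")

# Univariate follow-ups per module: with two sessions, a repeated-measures
# ANOVA on one module reduces to a paired t-test (F = t**2).
for m in modules:
    t, p = ttest_rel(data[f"{m}_s2"], data[f"{m}_s1"])
    print(f"{m} practice effect: t = {t:.3f}, p = {p:.3f}")
```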