Objective: Smartphones have the potential for capturing subtle changes in cognition that characterize preclinical Alzheimer’s disease (AD) in older adults. The Ambulatory Research in Cognition (ARC) smartphone application is based on principles from ecological momentary assessment (EMA) and administers brief tests of associative memory, processing speed, and working memory up to 4 times per day over 7 consecutive days. ARC was designed to be administered unsupervised using participants’ personal devices in their everyday environments. Methods: We evaluated the reliability and validity of ARC in a sample of 268 cognitively normal older adults (ages 65–97 years) and 22 individuals with very mild dementia (ages 61–88 years). Participants completed at least one 7-day cycle of ARC testing and conventional cognitive assessments; most also completed cerebrospinal fluid, amyloid and tau positron emission tomography, and structural magnetic resonance imaging studies. Results: First, ARC tasks were reliable: between-person reliability across the 7-day cycle and test-retest reliabilities at 6-month and 1-year follow-ups all exceeded 0.85. Second, ARC demonstrated construct validity as evidenced by correlations with conventional cognitive measures (r = 0.53 between composite scores). Third, ARC measures correlated with AD biomarker burden at baseline to a similar degree as conventional cognitive measures. Finally, the intensive 7-day cycle indicated that ARC was feasible (86.50% of those approached chose to enroll), well tolerated (80.42% adherence, 4.83% dropout), and rated favorably by older adult participants. Conclusions: Overall, the results suggest that ARC is reliable and valid and represents a feasible tool for assessing cognitive changes associated with the earliest stages of AD.
The COVID-19 pandemic has increased adoption of remote assessments in clinical research. However, longstanding stereotypes persist regarding older adults' technology familiarity and their willingness to participate in technology-enabled remote studies. We examined the validity of these stereotypes using a novel technology familiarity assessment (n = 342) and a critical evaluation of participation factors from an intensive smartphone study of cognition in older adults (n = 445). The technology assessment revealed that older age was strongly associated with less technology familiarity, less frequent engagement with technology, and higher difficulty ratings. Despite this, the majority (86.5%) of older adults elected to participate in the smartphone study and showed exceptional adherence (85.7%). Furthermore, among those enrolled, none of technology familiarity, knowledge, perceived difficulty, gender, race, or education was associated with adherence. These results suggest that while older adults remain significantly less familiar with technology than younger generations, with thoughtful study planning that emphasizes participant support and user-centered design, they are willing and capable participants in technology-enabled studies. Once enrolled, they are remarkably adherent.
Despite several meta-analyses suggesting that age differences in attentional control are "greatly exaggerated," there have been multiple reports of disproportionate age differences in the Stroop effect. The Stroop task is widely accepted as the gold standard for assessing attentional control and has been critical in comparisons across development and in studies of neuropsychological patient groups. However, accounting for group differences in processing speed is a notorious challenge in interpreting reaction time (RT) data. Within the aging literature, prior meta-analyses have relied on Brinley and State-Trace techniques to account for overall processing speed differences in evaluating the effects of within-participant manipulations. Such analyses are based on mean performance per group per study and have been criticized as potentially being insensitive to within-participant manipulations. To further examine possible age differences in Stroop performance, we amassed a dataset from 33 different computerized, color-naming Stroop task studies with available trial-level data from 2,896 participants. We conducted meta-regression analyses on a wide set of dependent measures that control for general slowing, tested for publication bias, and examined four potential methodological moderators. We also conducted linear mixed-effect modeling allowing the intercept to vary randomly for each participant, thereby accounting for individual differences in processing speed. All analyses, with the exception of the Brinley and State-Trace techniques, produced clear evidence supporting a disproportionate age difference in the Stroop effect above and beyond the effects of general slowing. Discussion highlights the importance of trial-level data in accounting for group differences in processing speed.
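The random-intercept logic described above can be illustrated with a toy simulation (every parameter value here is hypothetical): each simulated participant gets their own baseline speed, and the Stroop effect is then estimated from within-participant condition differences, which for balanced data removes that participant-level offset just as a random intercept would. This is a sketch of the idea, not the mixed-model fit used in the study.

```python
import random
from statistics import mean

random.seed(1)

# Simulate trial-level RTs: each participant has a randomly drawn
# baseline speed (the "random intercept"), plus a fixed Stroop cost
# added on incongruent trials. All values are illustrative.
def simulate(n_subjects, base_mu, stroop_cost):
    trials = []  # (subject, condition, rt_ms)
    for s in range(n_subjects):
        intercept = random.gauss(base_mu, 80)  # per-subject baseline
        for cond, cost in (("congruent", 0), ("incongruent", stroop_cost)):
            for _ in range(20):
                trials.append((s, cond, intercept + cost + random.gauss(0, 40)))
    return trials

trials = simulate(n_subjects=30, base_mu=600, stroop_cost=70)

# Estimate the congruency effect after removing each subject's baseline:
# average the within-subject incongruent-minus-congruent difference.
by_cell = {}
for s, cond, rt in trials:
    by_cell.setdefault((s, cond), []).append(rt)
effects = [mean(by_cell[(s, "incongruent")]) - mean(by_cell[(s, "congruent")])
           for s in range(30)]
print(f"estimated Stroop effect: {mean(effects):.0f} ms")
```

Because the 80 ms spread in baseline speed never enters the within-subject differences, the recovered effect tracks the simulated 70 ms cost; averaging raw condition means across groups with different baselines would not have that protection.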
The present study investigated the contribution of dispositional factors in accounting for the perplexing negative relationship between aging and mind-wandering (MW). First, we sought to examine whether experimentally manipulating participants' motivation during a modified Sustained Attention to Response Task (SART) would modulate sustained attention performance and MW reports for younger and older adults. Results indicated that a performance-based motivational incentive influenced self-reported motivation and objective measures of sustained attention performance for younger, but not older, adults as compared to a control block. However, the motivation manipulation did not significantly modulate either younger or older adults' MW reports. Second, we tested the unique contributions of conscientiousness, interest, and motivation in predicting state-level, trait-level, and SART MW reports along with a composite measure of all three predictors. The results from a series of mediation and regression analyses indicated (a) that conscientiousness and interest fully accounted for the relationship between age and four different self-reported MW estimates and (b) that self-reported motivation did not account for any unique variance in predicting MW reports above and beyond age. The dispositional factors also accounted for the observed differences in No-Go accuracy but did not fully account for the age differences in the coefficient of variation. Discussion focuses on distinctions between self-report and objective measures of MW and more general implications of considering dispositional factors in cognitive aging research.
Studies using remote cognitive testing must make a critical decision: whether to allow participants to use their own devices or to provide participants with a study-specific device. Bring-your-own-device (BYOD) studies have several advantages including increased accessibility, potential for larger sample sizes, and reduced participant burden. However, BYOD studies offer little control over device performance characteristics that could potentially influence results. In particular, response times measured by each device not only include the participant's true response time, but also latencies of the device itself. The present study investigated two prominent sources of device latencies that pose significant risks to data quality: device display output latency and touchscreen input latency. We comprehensively tested 26 popular smartphones ranging in price from <$100 to $1000+ running either Android or iOS to determine if hardware and operating system differences led to appreciable device latency variability. To accomplish this, a custom-built device called the Latency and Timing Assessment Robot (LaTARbot) measured device display output and capacitive touchscreen input latencies. We found considerable variability across smartphones in display and touch latencies which, if unaccounted for, could be misattributed as individual or group differences in response times. Specifically, total device (sum of display and touch) latencies ranged from 35 to 140 ms. We offer recommendations to researchers to increase the precision of data collection and analysis in the context of remote BYOD studies.
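One way such benchmarked latencies could be applied is to subtract a per-device offset from each measured response time before analysis. A minimal sketch, assuming a lookup table of total device latencies of the kind LaTARbot-style testing could produce (the device names and latency values below are hypothetical, chosen to span the 35–140 ms range reported above):

```python
# Hypothetical total device latencies in ms (display output + touch
# input), per device model. Values are illustrative, not measured.
DEVICE_LATENCY_MS = {
    "phone_a": 35.0,   # fast end of the reported range
    "phone_b": 140.0,  # slow end of the reported range
}

def corrected_rt(measured_rt_ms: float, device: str) -> float:
    """Remove the device's display + touch latency from a measured RT."""
    return measured_rt_ms - DEVICE_LATENCY_MS[device]

# The same true 500 ms response yields different measured RTs on the
# two devices; subtracting each device's latency makes them comparable.
print(corrected_rt(535.0, "phone_a"))   # 500.0
print(corrected_rt(640.0, "phone_b"))   # 500.0
```

Without this kind of correction, the 105 ms gap between these two hypothetical devices would appear in the data as a difference between participants rather than between phones.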