Previous studies have recommended that multiple measures be employed concurrently to provide converging evidence regarding the presence of suspect effort during neuropsychological assessment. However, if the tests are highly correlated, they do not represent independent sources of information. To date, no study has examined the correspondence between effort tests. The present study assessed the relationships among eight measures that can be used to assess effort (Rey 15-item, Rey Dot Counting Test, Rey Word Recognition Test, RAVLT recognition trial, Rey-Osterrieth Complex Figure Test effort equation, Digit Span, Warrington Recognition Memory Test-Words, and "b" Test) in a sample of 105 patients who were in litigation or attempting to obtain/maintain disability compensation and who displayed noncredible symptoms based on psychometric performance and behavioral criteria. Modest to moderate correlations were observed between test summary scores, with only two measures sharing more than 50% of score variance (Digit Span and Dot Counting). Moderate correlations were also observed between individual test scores reflecting response time, free recall, recognition, and false-positive errors, suggesting that patients may adopt specific strategies when producing noncredible performances. Overall, the results suggest that these tests generally provide nonredundant data regarding patient credibility in neuropsychological evaluations.
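A note on the redundancy criterion used here: the proportion of variance two measures share is the squared Pearson correlation, so two tests exceed 50% shared variance only when |r| > √0.50 ≈ .71. A minimal sketch of that check in Python, using placeholder scores and column names rather than the study's data:

```python
import numpy as np
import pandas as pd

# Placeholder scores for three of the eight effort measures
# (names and values are illustrative, not the study's data).
rng = np.random.default_rng(0)
scores = pd.DataFrame({
    "digit_span": rng.normal(7, 2, 105),
    "dot_counting": rng.normal(12, 3, 105),
    "rey_15_item": rng.normal(11, 2, 105),
})

r = scores.corr()         # pairwise Pearson correlations
shared_variance = r**2    # proportion of score variance shared

# >50% shared variance requires |r| > sqrt(0.50), about 0.71
print(shared_variance.round(2))
```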
Self-report measures are commonly relied upon in military healthcare environments to assess service members following a mild traumatic brain injury (mTBI). However, such instruments are susceptible to over-reporting and rarely include validity scales. This study evaluated the utility of the mild Brain Injury Atypical Symptoms scale (mBIAS) and the Neurobehavioral Symptom Inventory Validity-10 scale to detect symptom over-reporting. A total of 359 service members with a reported history of mTBI were separated into two symptom reporting groups based on MMPI-2-RF validity scales (i.e., non-over-reporting versus symptom over-reporting). The clinical utility of the mBIAS and Validity-10 as diagnostic indicators and screens of symptom over-reporting was evaluated by calculating sensitivity, specificity, positive test rate, positive predictive power (PPP), and negative predictive power (NPP) values. An mBIAS cut score of ≥10 was optimal as a diagnostic indicator, which resulted in high specificity and PPP; however, sensitivity was low. The utility of the mBIAS as a screening instrument was limited. A Validity-10 cut score of ≥33 was optimal as a diagnostic indicator. This resulted in very high specificity and PPP, but low sensitivity. A Validity-10 cut score of ≥7 was considered optimal as a screener, which resulted in moderate sensitivity, specificity, and NPP, but relatively low PPP. Owing to low sensitivity, the current data suggest that both the mBIAS and Validity-10 are insufficient as stand-alone measures of symptom over-reporting. However, Validity-10 scores at or above the identified cut-off of ≥7 should be taken as an indication that further evaluation to rule out symptom over-reporting is necessary.
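These diagnostic statistics all fall out of the 2×2 table formed by crossing a given cut score with the MMPI-2-RF-defined criterion groups. A minimal sketch of the computation (function and variable names are illustrative, not from the study):

```python
import numpy as np

def cut_score_stats(scores, over_reporting, cut):
    """Diagnostic statistics for a validity-scale cut score.

    scores:         array of scale totals (e.g., Validity-10)
    over_reporting: boolean array; True = MMPI-2-RF-defined over-reporter
    cut:            test is called positive when score >= cut
    """
    scores = np.asarray(scores)
    truth = np.asarray(over_reporting, dtype=bool)
    positive = scores >= cut
    tp = np.sum(positive & truth)    # true positives
    fp = np.sum(positive & ~truth)   # false positives
    fn = np.sum(~positive & truth)   # false negatives
    tn = np.sum(~positive & ~truth)  # true negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPP": tp / (tp + fp),             # positive predictive power
        "NPP": tn / (tn + fn),             # negative predictive power
        "positive_test_rate": positive.mean(),
    }
```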
Clinical practice guidelines support cognitive rehabilitation for people with a history of mild traumatic brain injury (mTBI) and cognitive impairment, but no class I randomized clinical trials have evaluated the efficacy of self-administered computerized cognitive training. The goal of this study was to evaluate the efficacy of a self-administered computerized plasticity-based cognitive training programme in primarily military/veteran participants with a history of mTBI and cognitive impairment. A multisite randomized double-blind clinical trial of a behavioural intervention with an active control was conducted from September 2013 to February 2017, including assessments at baseline, post-training, and after a 3-month follow-up period. Participants self-administered cognitive training (experimental and active control) programmes at home, remotely supervised by a healthcare coach, with an intended training schedule of 5 days per week, 1 h per day, for 13 weeks. Participants (149 contacted, 83 intent-to-treat) were confirmed to have a history of mTBI (mean of 7.2 years post-injury) through medical history/clinician interview and persistent cognitive impairment through neuropsychological testing and/or a quantitative participant-reported measure. The experimental intervention was a brain plasticity-based computerized cognitive training programme targeting speed/accuracy of information processing, and the active control was composed of computer games. The primary cognitive function measure was a composite of nine standardized neuropsychological assessments, and the primary directly observed functional measure was a timed instrumental activities of daily living assessment. Secondary outcome measures included participant-reported assessments of cognitive and mental health. The treatment group showed an improvement in the composite cognitive measure significantly larger than that of the active control group at both the post-training [+6.9 points, confidence interval (CI) +1.0 to +12.7, P = 0.025, d = 0.555] and the follow-up visit (+7.4 points, CI +0.6 to +14.3, P = 0.039, d = 0.591). Both large and small cognitive function improvements were seen twice as frequently in the treatment group as in the active control group. No significant between-group effects were seen on other measures, including the directly observed functional and symptom measures. Statistically equivalent improvements in both groups were seen in depressive and cognitive symptoms.
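The reported effect sizes are Cohen's d values. A minimal sketch of the standard between-group computation with a pooled standard deviation (the trial's exact variant, e.g. whether it was computed on change scores, is an assumption here):

```python
import numpy as np

def cohens_d(treatment, control):
    """Between-group Cohen's d using a pooled standard deviation."""
    nt, nc = len(treatment), len(control)
    pooled_var = ((nt - 1) * np.var(treatment, ddof=1) +
                  (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)
```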
Objective: Recent research has examined potential influences on performance validity testing beyond intentional feigning. The current study sought to examine the hypothesized relationships of two psychological constructs (self-efficacy and health locus of control) with performance validity test (PVT) results. Method: Retrospective review of 158 mild traumatic brain injury (mTBI) cases referred to an Army outpatient clinic for neuropsychological evaluation. The mTBI cases were classified according to passing or failing the Medical Symptom Validity Test (MSVT) or Non-Verbal Medical Symptom Validity Test (NV-MSVT). Group comparisons were performed using one-way ANOVA to evaluate differences between the PVT-Pass and PVT-Fail groups on self-efficacy (MMPI-2-RF Inefficacy scale) and locus of control (Multidimensional Health Locus of Control). Results: There was no relationship between self-efficacy or health locus of control and passing/failing PVTs. Conclusions: Further research is warranted to explore potential influences on PVT performance, which we conceptualize as analogous to experimental nuisance variables that may be amenable to intervention.
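With only two groups, the one-way ANOVA reduces to an independent-samples t-test (F = t²). A minimal sketch of the comparison using SciPy, with placeholder group sizes and scores (not the study's data):

```python
import numpy as np
from scipy import stats

# Placeholder MMPI-2-RF Inefficacy scores for the two groups
# (group sizes and values are illustrative, not the study's data).
rng = np.random.default_rng(1)
pvt_pass = rng.normal(55, 10, 110)
pvt_fail = rng.normal(56, 10, 48)

f_stat, p_value = stats.f_oneway(pvt_pass, pvt_fail)
df_within = len(pvt_pass) + len(pvt_fail) - 2
print(f"F(1, {df_within}) = {f_stat:.2f}, p = {p_value:.3f}")
```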
The current study tests the hypothesis that the "mountains and valleys pattern" (MVP) observed within the Attention and Concentration domain of the Meyers Neuropsychological Battery reflects the interference of emotional distress/anxiety with the patient's cognitive test performance. First, the MVP was objectively quantified using a formula that took into account both increased and decreased scores, rather than letting them cancel out through averaging. Then, in a total sample of 787 subjects, the Minnesota Multiphasic Personality Inventory-Second Edition Restructured Form (MMPI-2-RF) profile scores of cases with and without this pattern were compared, first in an extensive database and then in a smaller, matched-groups design. The presence of the MVP was related to MMPI-2-RF test performance. In particular, the pattern was related to emotional distress/anxiety scales but not to scales reflecting neurological or cognitive complaints. The degree of emotional distress experienced may affect attention and concentration test performance in a way that sometimes heightens focus and at other times disrupts it. The MVP may thus be used to assess the effects of emotional distress on the consistency of an individual patient's attention and concentration test performance.
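The abstract does not report the quantification formula itself. Purely for illustration, an index that preserves both upward and downward deviations (instead of averaging them away) could be built from absolute deviations; the following is a hypothetical sketch, not the Meyers formula:

```python
import numpy as np

def scatter_index(domain_scores):
    """Hypothetical peaks-and-valleys index (NOT the Meyers formula,
    which the abstract does not report): total absolute deviation of
    Attention/Concentration subtest scores from their own mean.
    Plain averaging lets elevated and depressed scores cancel each
    other out; absolute deviations keep scatter in both directions.
    """
    scores = np.asarray(domain_scores, dtype=float)
    return np.abs(scores - scores.mean()).sum()

# A flat profile and a mountains-and-valleys profile, same mean of 10:
print(scatter_index([10, 10, 10, 10]))  # 0.0  (no scatter)
print(scatter_index([14, 6, 13, 7]))    # 14.0 (marked scatter)
```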