2021
DOI: 10.1080/23279095.2020.1860987

Test-retest reliability on the Cambridge Neuropsychological Test Automated Battery: Comment on Karlsen et al. (2020)

Abstract: Test-retest reliability is essential to the development and validation of psychometric tools. Here we respond to the article by Karlsen et al. (Applied Neuropsychology: Adult, 2020), reporting test-retest reliability on the Cambridge Neuropsychological Test Automated Battery (CANTAB), with results that are in keeping with prior research on CANTAB and the broader cognitive assessment literature. However, after adopting a high threshold for adequate test-retest reliability, the authors report inadequate reliabili…
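
The commentary's central quantity, test-retest reliability, is typically reported as an intraclass correlation coefficient (ICC) and then judged against a cutoff. As a minimal sketch of that computation, the Python below implements the two-way random-effects, absolute-agreement, single-measure ICC(2,1) of Shrout and Fleiss (1979); the synthetic scores and the 0.7 cutoff are illustrative assumptions, not data or thresholds from the commentary or from Karlsen et al.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

    scores: (n_subjects, k_sessions) array of repeated measurements.
    """
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)    # between subjects
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)    # between sessions
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Synthetic test/retest scores for 8 participants (illustrative, not real data)
rng = np.random.default_rng(42)
trait = rng.normal(50.0, 10.0, size=8)                  # stable underlying ability
scores = np.column_stack([trait + rng.normal(0, 4, 8),  # session 1
                          trait + rng.normal(0, 4, 8)]) # session 2
icc = icc_2_1(scores)
verdict = "adequate" if icc >= 0.7 else "inadequate"    # stricter work uses 0.9
print(f"ICC(2,1) = {icc:.2f} -> {verdict} under a 0.7 cutoff")
```

Note that the dispute in the commentary is over the cutoff, not the computation: the same ICC can count as adequate under one convention and inadequate under a stricter one.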

Cited by 8 publications (6 citation statements)
References 26 publications

“…However, the general findings procured from Zhang's meta-analysis may not be likewise upheld in the case of the role of age in moderating the relationship between two different time measures on language anxiety. Psychometrically, the test-retest reliability of a trait-like construct is expected to be high and robust (Skirrow et al., 2022). As Skirrow et al. theorize, '[w]ith stable trait-like constructs, test-retest assessments using appropriately sensitive measures are likely to yield high test-retest reliabilities, since within-individual variance is minimized' (p. 2).…”
Section: Language Anxiety: Stability, Variability and Hypotheses (mentioning; confidence: 99%)
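
The quoted argument has a direct algebraic reading: reliability is the share of total score variance attributable to stable between-person differences, r = var_between / (var_between + var_within), so minimising within-individual variance drives the coefficient toward 1. A minimal numerical sketch, with illustrative variances that are assumptions rather than values from Skirrow et al.:

```python
# Reliability as the proportion of total score variance explained by stable
# between-person (trait) differences. The variances below are illustrative.
var_between = 25.0                    # trait variance across people
for var_within in (25.0, 9.0, 1.0):   # progressively smaller within-person noise
    r = var_between / (var_between + var_within)
    print(f"var_within = {var_within:>4}: reliability = {r:.2f}")
# Prints 0.50, 0.74, 0.96: as within-individual variance shrinks, test-retest
# reliability approaches 1, which is the point Skirrow et al. make for stable
# trait-like constructs.
```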

“…However, there is no gold standard for cognitive screening (Hachinski et al., 2006; Quinn et al., 2018; Quinn et al., 2021), with measures needing to cover a broad range of cognitive domains and to be valid, feasible and sensitive for the identification of impairments post-stroke (Chan et al., 2017; Chan et al., 2014; Stolwyk et al., 2014). They also provide only a 'snapshot' at a single point in time and should be responsive to change for monitoring cognitive recovery (Skirrow et al., 2021), making it difficult for one tool to meet all requirements. Exploring new technologies could provide alternatives for measuring cognition in acute stroke.…”
Section: Introduction (mentioning; confidence: 99%)

“…Computerised cognitive assessments offer sensitive continuous measures that can be customised for select subtests and repeated to mark changes over short epochs of cognitive recovery (Aslam et al., 2018; Pettigrew et al., 2021; Zygouris & Tsolaki, 2015), such as the acute phase post-stroke (Bernhardt et al., 2017). Computerised cognitive assessment platforms are feasible as research measures acutely post-stroke (Cumming et al., 2012; Shopin et al., 2013) and are designed for serial measurement of cognition over short time intervals (Campos-Magdaleno et al., 2021; Cambridge Cognition, 2022; Skirrow et al., 2021), but they have not been used in both capacities in the acute post-stroke period. We aimed to map the trajectory of cognitive recovery during the first week post-stroke and up to 90-day follow-up using serial computerised cognitive assessment.…”
Section: Introduction (mentioning; confidence: 99%)