2017
DOI: 10.1177/0018720817747731
Establishing Measurement Equivalence Across Computer- and Paper-Based Tests of Spatial Cognition

Abstract: Objective: The purpose of the present research is to establish measurement equivalence and test differences in reliability between computerized and pencil-and-paper-based tests of spatial cognition. Background: Researchers have increasingly adopted computerized test formats, but few attempt to establish equivalence for computer-based and paper-based tests. The mixed results in the literature on the test mode effect, which occurs when performance differs as a function of test medium, highlight the need to test fo…

Cited by 12 publications (6 citation statements); references 32 publications.
“…Strong arguments have been made that digital adaptations of cognitive tests, just as any other new cognitive test, should be submitted to rigorous validation and reliability studies (Bauer et al., 2012; American Educational Research Association et al., 2014). However, evidence for validity and reliability is still lacking for many digital tests (Schlegel & Gilliland, 2007; Schmand, 2019; Wild et al., 2008), and existing evidence is often mixed (e.g., Arrieux et al., 2017; Bailey et al., 2018; Björngrim et al., 2019; Cole et al., 2018; Daniel, Wahlstrom, & Zhang, 2014; Gualtieri & Johnson, 2006; Morrison et al., 2018).…”
mentioning
confidence: 99%
“…Specifically, with respect to the predicted scaled scores with high correlations to the norm sample, observed scores that are smaller than predicted scores on the tablet-administered K-RBANS suggest that participants’ performance in word-list and story recall partially meets the expected level. However, given that some psychological tests administered digitally showed format effects at this level [2, 4, 7, 8], the low format effects found for the tablet-administered K-RBANS appear acceptable. In particular, the average residuals for delayed memory showed a negative value, indicating that the predicted scaled scores were higher than the observed scaled scores.…”
Section: Discussion
mentioning
confidence: 99%
“…Moreover, it is necessary to give students an opportunity to practice before the test so that problems that may occur during the test can be anticipated and addressed. Nonetheless, for students who have become accustomed to online classes since the onset of the COVID-19 pandemic, using a tablet computer has become routine [28, 29]. The tablet computers used for UBT do not differ from the tablet computers students use for their studies; therefore, less than 10 min of instruction before the test may be sufficient.…”
Section: Discussion
mentioning
confidence: 99%