Sixty-three Woodcock–Johnson IV Tests of Achievement protocols, administered by 26 school psychology trainees, were examined to determine the frequency of examiner errors. Errors were noted on all protocols and ranged from 8 to 150 per administration. Critical (e.g., start, stop, and calculation) errors were noted on roughly 97% of protocols. Wilcoxon signed-rank tests indicated that multiple subtests were more prone to both critical and non-critical (e.g., failure to record answers verbatim, failure to record qualitative observations) errors; critical errors were generally more common on subtests with objective scoring criteria (i.e., Written Expression and Spelling), and non-critical errors were more frequently observed on subtests that required answers to be recorded verbatim. Based on these findings, we encourage trainers to place increased scrutiny on trainees' objective scoring performance and on the verbatim recording of responses. Areas in need of future research are also discussed.
The last comprehensive study to examine the assessment practices promoted by school psychology programs was published 25 years ago (i.e., Wilson & Reschly, 1996). Since then, significant changes to assessment theory and practice have occurred. Data from a 2020 survey of directors of school psychology programs were collected to gain an understanding of current graduate training in test use and assessment. Results were compared to a current survey of practitioners as well as past surveys of trainers. Results indicate that the assessment instruments used most frequently by practitioners tend to be those that are strongly emphasized in training programs. There were significant changes over time, most notably a large increase in the extent to which programs emphasize rating scales. Programs continue to strongly emphasize standardized, norm-referenced tests, particularly tests of cognitive abilities and academic achievement. Programs also continue to emphasize behavioral observation methods. In contrast to our expectations, results also reveal a persistent emphasis on low-value instruments such as projective tests. The implications of these findings for training and practice are discussed.
Eighty Woodcock–Johnson IV Tests of Achievement protocols from 40 test administrators were examined to determine the types and frequencies of administration and scoring errors made. Non-critical errors (e.g., failure to record verbatim) were found on every protocol (M = 37.2). Critical (e.g., standard score, start point) errors were found on 98.8% of protocols (M = 15.3). Additionally, a series of paired samples t-tests were conducted to determine differences in total, critical, and non-critical errors before and during COVID-19. No statistically significant differences were found. Our findings add to a growing body of research suggesting that errors on norm-referenced tests of achievement are pervasive. However, the frequency of errors did not appear to be affected by COVID-19 stressors or social distancing requirements. Implications of these findings for training and practice are discussed. Suggestions for future research are also provided.