It has been reasonably well established that test takers can, to varying degrees, answer some reading comprehension questions without reading the passages on which the questions are based, even on carefully constructed measures like the Scholastic Aptitude Test (SAT). The aim of this study was to determine which test-taking strategies examinees use, and which of those strategies are related to test performance, when reading passages are not available. The research focused on reading comprehension questions similar to those that will be used in the revised SAT, to be introduced in 1994. The most often cited strategies involved choosing answers on the basis of consistency with other questions and reconstructing the main theme of a missing passage from all of the questions and answers in a set. These strategies were more likely to result in successful performance on individual test items than were any of the many other possible (and less construct-relevant) strategies.
A communicative competence orientation was taken to study the validity of test-score inferences derived from the revised Test of Spoken English (TSE). To implement the approach, a sample of undergraduate students, primarily native speakers of English, provided a variety of reactions to, and judgements of, the test responses of a sample of TSE examinees. The TSE scores of these examinees, previously determined by official TSE raters, spanned the full range of TSE score levels. Undergraduate students were selected as evaluators because they, more than most other groups, are likely to interact with TSE examinees, many of whom become teaching assistants. Student evaluations were captured by devising and administering a secondary listening test (SLT) to assess students' understanding of TSE examinees' speech, as represented by their taped responses to tasks on the TSE. The objective was to determine the degree to which official TSE scores predict listeners' ability to understand the messages conveyed by TSE examinees. Analyses revealed a strong association between TSE score levels and the judgements, reactions, and understanding of listeners. This finding applied to all TSE tasks and to nearly all of the several different kinds of evaluations made by listeners. Along with other information, the evidence gathered here should help the TSE program meet professional standards for test validation. The procedures may also prove useful in future test-development efforts as a way of determining the difficulty of speaking tasks (and possibly writing tasks).
This report describes a practice analysis study of newly certified school psychologists, conducted in cooperation with the National Association of School Psychologists (NASP). The report documents the methods used to define the performance domain of school psychologists, describes the types of statistical analyses conducted, reports the results of these analyses, and discusses the implications of these results for test development. The examination resulting from this study will be used by NASP and many states to license and/or certify school psychologists.