2017
DOI: 10.1177/0022219417704636

Comparing Students With and Without Reading Difficulties on Reading Comprehension Assessments: A Meta-Analysis

Abstract: Researchers have increasingly investigated sources of variance in reading comprehension test scores, particularly with students with reading difficulties (RD). The purpose of this meta-analysis was to determine if the achievement gap between students with RD and typically developing (TD) students varies as a function of different reading comprehension response formats (e.g., multiple choice, cloze). A systematic literature review identified 82 eligible studies. All studies administered reading comprehension as…
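Group-gap comparisons in a meta-analysis of this kind are typically expressed as standardized mean differences. A minimal sketch, using entirely hypothetical scores and the standard Hedges' g small-sample correction (not the paper's actual computation), might look like:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) between two groups.

    m1/sd1/n1: mean, SD, and size of the typically developing group;
    m2/sd2/n2: the same for the reading-difficulties group.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)       # small-sample correction factor
    return j * d

# Hypothetical study: TD students outscore RD students on one measure
g = hedges_g(m1=102.0, sd1=12.0, n1=50, m2=88.0, sd2=11.0, n2=45)
print(f"g = {g:.2f}")
```

In a full meta-analysis, one such g would be computed per study (or per response format within a study) and then pooled with inverse-variance weights.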

Cited by 33 publications (33 citation statements)
References 34 publications
“…Valid, reliable and fair measurement of reasoning should provide information about a test taker's general intelligence level, which has been shown to have strong predictive value not only for academic and professional success but also for success in life and health (Carroll, 1993; Mittring and Rost, 2008; Danner et al., 2016; Schmidt et al., 2016). However, the robustness of test scores with respect to method bias has been questioned by past research, for example, by studies comparing an adaptive and a fixed-item version of the same matrices test (Ortner and Caspers, 2011; Ortner et al., 2014) or by a study investigating achievement differences between students with and without reading difficulties across varying response formats (Collins et al., 2018). Identifying the psychometric features of tests and the personal and environmental characteristics of test takers that may contribute to the emergence of test bias is hence a highly relevant task for psychological research.…”
Section: Introduction
confidence: 99%
“…We tested two alternatives that have been discussed by researchers and practitioners in our study in Kenya, but there may be other possibilities that would be cost‐effective as well as provide richer data on reading comprehension, particularly at the very low reading levels that pervade low‐income and middle‐income countries. Research from the United States has concluded that the gap in reading comprehension between children with reading difficulties and other children depends on the type of assessment used (Collins, Lindström, & Compton, 2018). Future research should attempt to identify methods that are reliable and valid across subgroups as well as for the population as a whole.…”
Section: Discussion
confidence: 99%
“…In the United States, in a study of 97 first-to-tenth graders, Cutting and Scarborough (2006) analysed three widely used tests: the Wechsler Individual Achievement Tests (Wechsler, 1992), the Gates-MacGinitie Reading Test (MacGinitie, MacGinitie, Maria, & Dreyer, 2000) and the Gray Oral Reading Test (GORT; Wiederholt & Bryant, 1992), and reported inconsistencies between the tests in terms of identifying which children had comprehension difficulties. Other studies have corroborated that correlations between scores on reading comprehension assessments are surprisingly low and that different reading comprehension tests are inconsistent in their diagnoses (Colenbrander, Nickels, & Kohnen, 2017; Collins, Lindström, & Compton, 2018; Keenan et al., 2008). For example, one study assessed 995 children (mean age 11.17 years) using four standardised reading comprehension tests: the GORT-3 (Wiederholt & Bryant, 1992), the Qualitative Reading Inventory-3 (QRI-3; Leslie & Caldwell, 2001), the Woodcock-Johnson Passage Comprehension-3 (WJPC-3; Woodcock, McGrew, & Mather, 2001) and the Peabody Individual Achievement Test (PIAT-3; Dunn & Markwardt, 1970).…”
Section: Reading Comprehension Test Differences
confidence: 94%
“…To explain these differences, some researchers indicate that different reading comprehension tests do not assess the same array of cognitive processes (Fletcher, 2006). Others state that reading comprehension tests may differ in factors such as presentation structure (e.g., whether the text is available while answering the questions, whether the text can be consulted, text length and question type) and in the way they are administered (e.g., multiple choice, open questions, short answers, retell and timed answers; for a review, see Collins et al., 2018). These factors may influence the reading comprehension scores obtained.…”
Section: Reading Comprehension Test Differences
confidence: 99%