This study assessed the contributions of various test features (passage variables, question types, and format variables) to reading comprehension performance for successful and unsuccessful readers. Items from a typical standardized reading comprehension test were analyzed according to 20 predictor test features. A three-stage conditional regression approach assessed how well these features predicted item-difficulty scores for the two reader groups. Two features, location of response information and stem length, accounted for a significant proportion of the explained variance for both groups. Possible explanatory hypotheses are considered, and implications are drawn for improved test design as well as for further research concerning interactions between assessment task features and reader performance.

A challenge to researchers and practitioners over the years has been the accurate assessment of reading comprehension processes. Reading comprehension presents special assessment challenges because of complex interactions between reader, text, and task (see Johnston, 1983, for a review). The most commonly used tool for the formal assessment of reading comprehension remains the multiple-choice question tapping information presented in rather brief, intact passages. Educational decisions concerning students' placements and programs are influenced heavily by performance on such standardized measures, and the assumption is often made that all students' performance scores represent the same valid manifestation of the latent construct, comprehension. However, it has been suggested that the validity of such assessments can be affected by certain features of the test itself, such as passage variables (e.g.