One hundred forty-four high school students received either no feedback, immediate feedback, or delayed feedback following a 20-item multiple-choice test covering a meaningful passage. Presence or absence of feedback did not affect the probability of being right on a 1-week retention test, given a right answer on the initial test. As expected, however, when the measure was the probability of being right on the retention test, given a wrong answer on the initial test, feedback proved significantly better than no feedback, and delayed feedback proved superior to immediate feedback. The results show that the delay-retention effect occurs under conditions approximating those of real instruction and confirm the interference-perseveration interpretation of the phenomenon.
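The two conditional measures described above can be made concrete. The sketch below uses hypothetical response records (not the study's data) to show how each probability is computed from paired initial-test and retention-test outcomes:

```python
# Hypothetical illustration of the two conditional retention measures.
# Each record: (correct_on_initial_test, correct_on_retention_test).
responses = [
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, True),
]

def conditional_accuracy(data, initial_correct):
    """P(correct on retention test | given outcome on initial test)."""
    retention = [ret for init, ret in data if init == initial_correct]
    return sum(retention) / len(retention)

# Probability of being right on retention, given a right initial answer.
p_right_given_right = conditional_accuracy(responses, True)
# Probability of being right on retention, given a wrong initial answer
# (the measure on which the feedback conditions differed).
p_right_given_wrong = conditional_accuracy(responses, False)
```

The study's key contrast concerns the second quantity: under no feedback it reflects only guessing and relearning, whereas feedback conditions can raise it by correcting initial errors.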
In previous work, a monitoring failure, or illusion of knowing, is said to occur when a reader's self-assessment of comprehension is high but an objective measure indicates comprehension failure. Although earlier studies used passages with inserted contradictions, the present experiment used unadulterated expository text. College students read either a difficult or an easy expository passage under instructions intended to elicit either deep processing or relatively shallow (but still semantic) processing. The illusion of knowing occurred primarily when the passage was difficult and the instructions cued a relatively shallow level of processing. Readers who exhibited an illusion of knowing tended to have shown distortions in their passage summaries, whereas subjects who knew that they had failed to comprehend were more likely to have omitted information relevant to the main point.
Many teachers and curriculum specialists claim that the reading demand of many mathematics items is so great that students perform poorly on mathematics tests even when they have a good understanding of mathematics. The purpose of this research was to test this claim empirically. The analysis compared examinees who differed in reading ability within a multidimensional differential item functioning (DIF) framework. Results indicated that performance on some mathematics items was influenced by reading ability, such that examinees with lower proficiency classifications in reading were less likely to answer these items correctly. This finding suggests that incorrect proficiency classifications may have occurred for some examinees. However, it is argued that rather than eliminating these mathematics items from the test, which would seem to decrease the construct validity of the test, attempts should be made to control the confounding effect of reading that is measured by some of the mathematics items.

Validity has been described as scientific inquiry into test score meaning (Messick, 1989). For large-scale mathematics achievement tests to be valid, the scores obtained from such tests must reflect the mathematics curriculum or content domain. The results of such tests are often used to determine student proficiency in mathematics, and clearly, if the scores do not reflect the content domain, then proficiency classifications based on them will be inaccurate.