Little evidence is available regarding the differential impact of reading versus reading and writing on multiple-source comprehension. The present study aims to: (1) compare the inferential comprehension performance of students in reading versus reading/synthesis conditions; (2) explore the impact of performing the tasks on paper versus on screen with the Read&Answer (R&A) software; and (3) explore the extent to which rereading, notetaking, and the quality of the written synthesis can explain students’ comprehension scores. For the students in the synthesis condition, we also examined the relationship between the quality of the synthesis they produced and the comprehension they achieved. A total of 155 psychology undergraduates were randomly assigned either to the reading (n = 78) or to the reading/synthesis condition (n = 77). From this sample, 79 participants carried out the task with the Read&Answer software, and 76 solved the task on paper. All the students completed a prior knowledge questionnaire and read three complementary texts about the conception of intelligence. Students in the reading condition answered an inferential comprehension test, whereas students in the synthesis condition were asked to write a synthesis before taking the same test. Results show no differences in comprehension among students in the four conditions (task and media). There was no significant association between rereading and task condition. However, students in the synthesis condition were more likely to take notes. We found that two of the categories for the quality of the synthesis, textual organization and accuracy of content, had an impact on inferential comprehension for the participants who wrote it. The quality of the synthesis mediated the relationship between students’ prior knowledge and inferential comprehension.
Studies have shown that inferential questions encourage a more in‐depth understanding of texts and that students need to learn appropriate strategies for answering them, particularly when they deal with multiple texts. In this experimental study, the authors aimed to improve the ability of eighth‐grade students (aged 13 to 14) to answer intra‐ and intertextual inferential questions when they read one or multiple complementary texts. The intervention was implemented by a group of middle‐school history teachers. Teachers in both the intervention and control groups (IG and CG, respectively) taught the same teaching unit using the same reading materials. However, teachers in the IG participated in 12 hours of professional development seminars on the analysis of their classroom practice and on how to improve their questioning strategies. Post‐intervention results revealed that students in the IG were significantly better than those in the CG at answering intra‐ and intertextual inferential questions. This difference was maintained at follow‐up (2 months after finishing the intervention). Students in the IG also performed better than those in the CG on a learning test. These results confirm the value of teaching students how to answer complex questions, especially when they refer to more than one text. The findings also support the value of a professional development program that enables teachers to reflect on their practice.