A year-long study of 131 second and third graders in 12 classrooms compared three daily 20-minute treatments. a) Fifty-eight students in six classrooms used the 1999-2000 version of Project LISTEN's Reading Tutor, a computer program that uses automated speech recognition to listen to a child read aloud and gives spoken and graphical assistance. Students took daily turns using one shared Reading Tutor in their classroom while the rest of their class received regular instruction. b) Thirty-four students in the other six classrooms were pulled out daily for one-on-one tutoring by certified teachers. To control for materials, the human tutors used the same set of stories as the Reading Tutor. c) Thirty-nine students served as in-classroom controls, receiving regular instruction without tutoring. We compared students' pre- to post-test gains on the Word Identification, Word Attack, Word Comprehension, and Passage Comprehension subtests of the Woodcock Reading Mastery Test, and in oral reading fluency. Surprisingly, the human-tutored group significantly outgained the Reading Tutor group only in Word Attack (main effects p < .02, effect size .55). Third graders in both the computer- and human-tutored conditions outgained the control group significantly in Word Comprehension (p < .02, respective effect sizes .56 and .72) and suggestively in Passage Comprehension (p = .14, respective effect sizes .48 and .34). No differences between groups in Word Identification or fluency gains were significant. These results are consistent with an earlier study in which students who used the 1998 version of the Reading Tutor outgained their matched classmates in Passage Comprehension (p = .11, effect size .60), but not in Word Attack, Word Identification, or fluency. To shed light on outcome differences between tutoring conditions and between individual human tutors, we compared process variables. Analysis of logs from all 6,080 human and computer tutoring sessions showed that human tutors included less rereading and more frequent writing than the Reading Tutor. Micro-analysis of 40 videotaped sessions showed that students who used the Reading Tutor spent considerable time waiting for it to respond, requested help more frequently, and picked easier stories when it was their turn. Human tutors corrected more errors, focused more on individual letters, and provided assistance more interactively, for example getting students to sound out words rather than sounding out words for them, as the Reading Tutor did.
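The comparisons above are reported as differences in pre- to post-test gains, summarized with effect sizes. As a point of reference only, the sketch below shows one conventional way to compute a gain-score effect size (a pooled-SD Cohen's d) in Python; the scores are made up and the formula choice is an assumption for illustration, not the study's actual analysis code.

```python
# Minimal sketch (not the study's analysis code): pre- to post-test gain scores
# and a Cohen's d effect size comparing two tutoring conditions.
# Data and the pooled-SD variant are illustrative assumptions.
import numpy as np

def gain_effect_size(pre_a, post_a, pre_b, post_b):
    """Cohen's d for gain scores of condition A vs. condition B."""
    gains_a = np.asarray(post_a) - np.asarray(pre_a)
    gains_b = np.asarray(post_b) - np.asarray(pre_b)
    n_a, n_b = len(gains_a), len(gains_b)
    # Pooled standard deviation of the two gain distributions.
    pooled_sd = np.sqrt(((n_a - 1) * gains_a.var(ddof=1) +
                         (n_b - 1) * gains_b.var(ddof=1)) / (n_a + n_b - 2))
    return (gains_a.mean() - gains_b.mean()) / pooled_sd

# Hypothetical Word Attack scores: human-tutored (A) vs. Reading Tutor (B) students.
d = gain_effect_size(pre_a=[18, 22, 20, 25], post_a=[27, 30, 26, 31],
                     pre_b=[19, 21, 23, 24], post_b=[24, 25, 27, 28])
print(f"Effect size (Cohen's d) of gain difference = {d:.2f}")
```

The abstract does not state which effect-size variant was used; the pooled-SD form above is simply the most common convention for two-group gain comparisons.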
Abstract. This paper presents the first statistically reliable empirical evidence from a controlled study for the effect of human-provided emotional scaffolding on student persistence in an intelligent tutoring system. We describe an experiment that added human-provided emotional scaffolding to an automated Reading Tutor that listens, and discuss the methodology we developed to conduct this experiment. Each student participated in one (experimental) session with emotional scaffolding and in one (control) session without it, counterbalanced by order of session. Each session was divided into several portions. After the student completed each portion, the Reading Tutor gave the student a choice: continue or quit. We measured persistence as the number of portions the student completed. Human-provided emotional scaffolding added to the automated Reading Tutor resulted in increased student persistence compared to the Reading Tutor alone. Increased persistence means increased time on task, which ought to lead to improved learning. If these results for reading turn out to hold for other domains too, the implication for intelligent tutoring systems is that they should respond with not just cognitive support but emotional scaffolding as well. Furthermore, the general technique of adding human-supplied capabilities to an existing intelligent tutoring system should prove useful for studying other ITSs too.
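Because each student served as their own control (one scaffolded and one unscaffolded session, counterbalanced by order), persistence can be compared within subjects. The sketch below is a hypothetical illustration of such a paired analysis in Python; the data and the choice of a Wilcoxon signed-rank test are assumptions for exposition, not the analysis reported in the paper.

```python
# Illustrative sketch only: a within-subject comparison of persistence, measured
# as the number of story portions completed in the scaffolded vs. the
# unscaffolded session. Data and test choice are hypothetical.
from scipy.stats import wilcoxon

# (portions completed with emotional scaffolding, without), one pair per student
persistence = [(5, 3), (4, 4), (6, 2), (3, 3), (5, 4), (7, 5), (4, 2), (6, 4)]
scaffolded = [s for s, c in persistence]
control = [c for s, c in persistence]

stat, p = wilcoxon(scaffolded, control)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.3f}")
```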
Analyzing how students' time is allocated among activities in a school-deployed mixed-initiative tutor can be illuminating but surprisingly tricky. We discuss some complementary methods that we have used to understand how tutoring time is spent, such as analyzing sample videotaped sessions by hand and querying a database generated from session logs. We identify issues, methods, and lessons that may be relevant to other tutors. One theme is that iterative design of "non-tutoring" components can enhance a tutor's effectiveness, not by improved teaching, but by reducing the time wasted on non-learning activities. Another is that it is possible to relate students' time allocation to improvements in various outcome measures.
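As one concrete illustration of the "querying a database generated from session logs" approach, here is a minimal Python/SQLite sketch that totals time per activity. The table and column names (an events table with an activity label and start/end timestamps) are hypothetical and do not reproduce Project LISTEN's actual log schema.

```python
# Sketch of summarizing tutoring time by activity from a session-log database.
# Schema (events table, activity label, start/end times in seconds) is assumed.
import sqlite3

conn = sqlite3.connect("session_logs.db")  # assumed log database file
query = """
SELECT activity,
       SUM(end_time - start_time) AS total_seconds,
       100.0 * SUM(end_time - start_time)
             / (SELECT SUM(end_time - start_time) FROM events) AS pct_of_time
FROM events
GROUP BY activity
ORDER BY total_seconds DESC;
"""
for activity, seconds, pct in conn.execute(query):
    print(f"{activity:20s} {seconds / 60:8.1f} min  ({pct:4.1f}% of tutoring time)")
conn.close()
```

A query like this complements hand-coded video analysis: the database covers every logged session, while the videotapes capture behavior (such as waiting or off-task time) that the logs may not record.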