In education research, there is a widely cited result called "Bloom's two sigma" that characterizes the differences in learning outcomes between students who receive one-on-one tutoring and those who receive traditional classroom instruction [1]. Tutored students scored, on average, in the 95th percentile, or two sigmas above the mean, compared to students who received traditional classroom instruction. In human-robot interaction research, however, there is relatively little work exploring the potential benefits of personalizing a robot's actions to an individual's strengths and weaknesses. In this study, participants solved grid-based logic puzzles with the help of a personalized or non-personalized robot tutor. Participants' puzzle-solving times were compared between two non-personalized control conditions and two personalized conditions (n=80). Although the robot's personalizations were less sophisticated than what a human tutor can do, we still witnessed a "one-sigma" improvement (68th percentile) in post-tests between treatment and control groups. We present these results as evidence that even relatively simple personalizations can yield significant benefits in educational or assistive human-robot interactions.
Personalized education technologies capable of delivering adaptive interventions could play an important role in addressing the needs of diverse young learners at a critical time of school readiness. We present an innovative personalized social robot learning companion system that utilizes children's verbal and nonverbal affective cues to modulate their engagement and maximize their long-term learning gains. We propose an affective reinforcement learning approach to train a personalized policy for each student during an educational activity in which a child and a robot tell stories to each other. Using the personalized policy, the robot selects stories that are optimized for each child's engagement and linguistic skill progression. We recruited 67 bilingual and English language learners between the ages of 4 and 6 years old to participate in a between-subjects study to evaluate our system. Over a three-month deployment in schools, a unique storytelling policy was trained to deliver a personalized story curriculum for each child in the Personalized group. We compared their engagement and learning outcomes to a Non-personalized group with a fixed-curriculum robot, and a baseline group that had no robot intervention. Our results show that the affective policy successfully personalized to each child in the Personalized condition, boosting engagement and learning outcomes: these children learned and retained more target words, and used more target syntax structures, than children in the other groups.
Though substantial research has been dedicated to using technology to improve education, no current methods are as effective as one-on-one tutoring. A critical, though relatively understudied, aspect of effective tutoring is modulating the student's affective state throughout the tutoring session in order to maximize long-term learning gains. We developed an integrated experimental paradigm in which children play a second-language learning game on a tablet, in collaboration with a fully autonomous social robotic learning companion. As part of the system, we measured children's valence and engagement via an automatic facial expression analysis system. These signals were combined into a reward signal that fed into the robot's affective reinforcement learning algorithm. Over several sessions, the robot played the game and personalized its motivational strategies (using verbal and non-verbal actions) to each student. We evaluated this system with 34 children in preschool classrooms over a period of two months. We found that (1) children learned new words from the repeated tutoring sessions, (2) the affective policy personalized to students over the duration of the study, and (3) students who interacted with a robot that personalized its affective feedback strategy showed a significant increase in valence, as compared to students who interacted with a non-personalizing robot. This integrated system of tablet-based educational content, affective sensing, affective policy learning, and an autonomous social robot holds great promise for a more comprehensive approach to personalized tutoring.
We deployed an autonomous social robotic learning companion in three preschool classrooms at an American public school for two months. Before and after this deployment, we asked the teachers and teaching assistants who worked in the classrooms about their views on the use of social robots in preschool education. We found that teachers' expectations about the experience of having a robot in their classrooms often did not match their actual experience. These teachers generally expected the robot to be disruptive, but found that it was not; furthermore, they had numerous positive ideas about the robot's potential as a new educational tool for their classrooms. Based on these interviews, we provide a summary of lessons we learned about running child-robot interaction studies in preschools. We share some advice for future researchers who may wish to engage teachers and schools in the course of their own human-robot interaction work. Understanding the teachers, the classroom environment, and the constraints involved is especially important for microgenetic and longitudinal studies, which require more of the school's time (as well as more of the researchers' time) and represent a greater investment for everyone involved.
Tega is a new expressive "squash and stretch", Android-based social robot platform, designed to enable long-term interactions with children.

I. A NEW SOCIAL ROBOT PLATFORM

Tega is the newest social robot platform designed and built by a diverse team of engineers, software developers, and artists at the Personal Robots Group at the MIT Media Lab. This robot, with its furry, brightly colored appearance, was developed specifically to enable long-term interactions with children. Tega comes from a line of Android-based robots that leverage smartphones to drive computation and display an animated face [1]-[3]. The phone runs software for behavior control, motor control, and sensor processing. The phone's abilities are augmented with an external high-definition camera mounted in the robot's forehead and a set of on-board speakers. Tega's motion was inspired by the "squash and stretch" principles of animation [4], creating natural and organic motion while keeping the actuator count low. Tega has five degrees of freedom: head up/down, waist-tilt left/right, waist-lean forward/back, full-body up/down, and full-body left/right. These joints are combinatorial and allow the robot to express behaviors consistently, rapidly, and reliably. The robot can run autonomously or can be remote-operated by a person through a teleoperation interface. The robot can operate on battery power for up to six hours before needing to be recharged, which allows for easier testing in the field. For example, Tega was the robot platform used in a recent two-month study on second-language learning conducted in three public school classrooms [5], [6]. A variety of facial expressions and body motions can be triggered on the robot, such as laughter, excitement, and frustration. Additional animations can be developed on a computer model of the robot and exported via a software pipeline to a set of motor commands that can be executed on the physical robot, enabling rapid development of new expressive behaviors. Speech can be played back from prerecorded audio tracks, generated on the fly with a text-to-speech system, or streamed to the robot via a real-time voice streaming and pitch-shifting interface. This video showcases the Tega robot's design and implementation. It is a first look at the robot's capabilities as a research platform. The video highlights the robot's motion, expressive capabilities, and its use in ongoing studies of child-robot interaction.