Children and their parents may undergo challenging experiences when admitted for inpatient care at pediatric hospitals. While most hospitals make efforts to provide socio-emotional support for patients and their families during care, gaps still exist between the supply of and demand for human resources. The Huggable project aims to close this gap by creating a social robot that can mitigate stress, anxiety, and pain in pediatric patients by engaging them in playful interactive activities. In this paper, we introduce the experimental design of a larger study comparing the effects of the Huggable robot to a virtual character on a screen and a plush teddy bear, and we provide initial qualitative analyses of patients' and parents' behaviors during the intervention sessions collected thus far. Our preliminary results suggest that children are more eager to emotionally connect with, and are more physically activated by, a robot than a virtual character, illustrating the potential of social robots to provide socio-emotional support during inpatient pediatric care.
Though substantial research has been dedicated to using technology to improve education, no current methods are as effective as one-on-one tutoring. A critical, though relatively understudied, aspect of effective tutoring is modulating the student's affective state throughout the tutoring session in order to maximize long-term learning gains. We developed an integrated experimental paradigm in which children play a second-language learning game on a tablet in collaboration with a fully autonomous social robotic learning companion. As part of the system, we measured children's valence and engagement via an automatic facial expression analysis system. These signals were combined into a reward signal that fed into the robot's affective reinforcement learning algorithm. Over several sessions, the robot played the game and personalized its motivational strategies (using verbal and non-verbal actions) to each student. We evaluated this system with 34 children in preschool classrooms over two months. We found that (1) children learned new words from the repeated tutoring sessions, (2) the affective policy personalized to students over the duration of the study, and (3) students who interacted with a robot that personalized its affective feedback strategy showed a significant increase in valence compared to students who interacted with a non-personalizing robot. This integrated system of tablet-based educational content, affective sensing, affective policy learning, and an autonomous social robot holds great promise for a more comprehensive approach to personalized tutoring.
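To make the affective learning loop concrete, the sketch below shows one way per-frame valence and engagement estimates might be blended into a scalar reward that drives an action-value update over motivational actions. The weighting scheme, action set, and epsilon-greedy bandit update are illustrative assumptions, not the reward formulation or learning algorithm reported in the paper.

```python
import random

# Hypothetical action set of motivational behaviors the robot could perform.
ACTIONS = ["verbal_praise", "nonverbal_nod", "encouraging_gesture"]

def affective_reward(valence, engagement, w_valence=0.5, w_engagement=0.5):
    """Blend valence and engagement estimates (assumed in [-1, 1]) into one reward."""
    return w_valence * valence + w_engagement * engagement

class AffectivePolicy:
    """Epsilon-greedy bandit over motivational actions (an illustrative stand-in)."""
    def __init__(self, actions, epsilon=0.1, lr=0.1):
        self.q = {a: 0.0 for a in actions}   # one value estimate per action
        self.epsilon = epsilon               # exploration rate
        self.lr = lr                         # step size for value updates

    def select(self):
        # Explore occasionally; otherwise pick the highest-valued action.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Move the chosen action's value estimate toward the observed reward.
        self.q[action] += self.lr * (reward - self.q[action])

policy = AffectivePolicy(ACTIONS)
action = policy.select()                                 # robot performs this action
reward = affective_reward(valence=0.4, engagement=0.7)   # from facial analysis
policy.update(action, reward)
```

Repeating this select-observe-update loop across sessions is what lets the value estimates, and hence the robot's motivational strategy, drift toward whatever actions reliably improve a particular child's affective state.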
We deployed an autonomous social robotic learning companion in three preschool classrooms at an American public school for two months. Before and after this deployment, we asked the teachers and teaching assistants who worked in the classrooms about their views on the use of social robots in preschool education. We found that teachers' expectations about having a robot in their classrooms often did not match their actual experience: they generally expected the robot to be disruptive, but found that it was not, and they had numerous positive ideas about the robot's potential as a new educational tool for their classrooms. Based on these interviews, we summarize the lessons we learned about running child-robot interaction studies in preschools and offer advice for future researchers who may wish to engage teachers and schools in their own human-robot interaction work. Understanding the teachers, the classroom environment, and the constraints involved is especially important for microgenetic and longitudinal studies, which require more of the school's time (as well as more of the researchers' time) and represent a greater investment for everyone involved.
Tega is a new expressive "squash and stretch", Android-based social robot platform, designed to enable long-term interactions with children.

I. A NEW SOCIAL ROBOT PLATFORM

Tega is the newest social robot platform designed and built by a diverse team of engineers, software developers, and artists at the Personal Robots Group at the MIT Media Lab. This robot, with its furry, brightly colored appearance, was developed specifically to enable long-term interactions with children. Tega comes from a line of Android-based robots that leverage smartphones to drive computation and display an animated face [1]-[3]. The phone runs software for behavior control, motor control, and sensor processing. The phone's abilities are augmented with an external high-definition camera mounted in the robot's forehead and a set of on-board speakers.

Tega's motion was inspired by the "squash and stretch" principles of animation [4], creating natural and organic motion while keeping the actuator count low. Tega has five degrees of freedom: head up/down, waist-tilt left/right, waist-lean forward/back, full-body up/down, and full-body left/right. These joints can be combined, allowing the robot to express behaviors consistently, rapidly, and reliably.

The robot can run autonomously or can be remote-operated by a person through a teleoperation interface. It can operate on battery power for up to six hours before needing to be recharged, which allows for easier testing in the field; for example, Tega was the robot platform used in a recent two-month study on second-language learning conducted in three public school classrooms [5], [6].

A variety of facial expressions and body motions can be triggered on the robot, such as laughter, excitement, and frustration. Additional animations can be developed on a computer model of the robot and exported via a software pipeline to a set of motor commands that can be executed on the physical robot, enabling rapid development of new expressive behaviors. Speech can be played back from prerecorded audio tracks, generated on the fly with a text-to-speech system, or streamed to the robot via a real-time voice streaming and pitch-shifting interface.

This video showcases the Tega robot's design and implementation and offers a first look at the robot's capabilities as a research platform. The video highlights the robot's motion, expressive capabilities, and its use in ongoing studies of child-robot interaction.
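As a rough illustration of how such a low-actuator-count platform can compose expressive behaviors from a handful of joints, the sketch below defines a pose command over Tega's five degrees of freedom and sequences three poses into a squash-and-stretch "excited" motion. The class name, field names, and value ranges are hypothetical illustrations, not Tega's actual control API.

```python
from dataclasses import dataclass

@dataclass
class PoseCommand:
    """One target pose across the five degrees of freedom (hypothetical API)."""
    head_tilt: float         # head up/down, normalized to [-1, 1]
    waist_tilt: float        # waist tilt left/right
    waist_lean: float        # waist lean forward/back
    body_height: float       # full-body up/down
    body_turn: float         # full-body left/right
    duration_s: float = 0.5  # seconds to interpolate into this pose

def excited_bounce():
    """Compose an 'excited' behavior as a short squash-and-stretch pose sequence."""
    return [
        PoseCommand(0.3, 0.0, 0.2, 1.0, 0.0, 0.2),     # stretch: tall and leaning forward
        PoseCommand(-0.2, 0.0, -0.1, -0.5, 0.0, 0.2),  # squash: compressed downward
        PoseCommand(0.3, 0.0, 0.2, 1.0, 0.0, 0.2),     # stretch again
    ]

for pose in excited_bounce():
    print(pose)  # a real controller would stream these as timed motor commands
```

Representing each behavior as a short, timed pose sequence mirrors the animation pipeline described above: expressive motions are authored on a computer model and exported as motor commands for the physical robot.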