Computers are already powerful enough to sustain useful robots that interact with and assist humans in everyday life. However, progress requires a scientific shake-up in goals and methods not unlike the cognitive revolution that occurred 40 years ago. This document presents the origins and early steps of the RUBI/QRIO project, in which two humanoid robots, RUBI and QRIO, are being brought to an early childhood education center on a daily basis for a period of at least one year. The goal of the RUBI/QRIO project is to accelerate progress on interactive robots for everyday life by addressing the problem at multiple levels, including the development of new scientific methods, formal approaches, and a scientific agenda. The current focus of the project is on educational environments, exploring the ways in which this technology could be used to assist teachers and enrich the educational experiences of children. We describe the origins, philosophy, and first steps of the project, which included immersion of the researchers in the Early Childhood Education Center at UCSD, development of a social robot prototype named RUBI, and daily field studies with RUBI and QRIO, a prototype humanoid developed by Sony.
The design and development of social robots that interact with and assist people in daily life requires moving into unconstrained, daily-life environments. This presents unexplored methodological challenges for robotics researchers. Is it possible, for example, to perform useful experiments under the uncontrolled conditions of everyday environments? How long do these studies need to be to provide reliable results? What evaluation methods can be used? In this paper we present preliminary results from a study designed to evaluate an algorithm for social robots under relatively uncontrolled, daily-life conditions. The study was conducted as part of the RUBI project, whose goal is to design and develop social robots by immersion in the environment in which the robots are supposed to operate. First, we found that in spite of the relatively chaotic conditions and lack of control in the daily activities of a child-care center, it is possible to perform experiments in a relatively short period of time and with reliable results. We found that continuous audience response methods borrowed from marketing research provided good inter-observer reliabilities, on the order of 70%, and good temporal resolution (a cut-off frequency on the order of 1 cycle per minute) at low cost (evaluation is performed continuously in real time). We also experimented with objective behavioral descriptions, such as tracking children's movement across a room. These approaches complemented each other and provided a useful picture of the temporal dynamics of the child-robot interaction, allowing us to gather baseline data for evaluating future systems.
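To make the continuous audience response method above concrete, the following is a minimal sketch of how inter-observer reliability could be computed from two raters' continuous engagement traces, assuming each rater records a 0-100 dial once per second and the traces are averaged into one-minute bins to match the reported temporal resolution. The function names, sampling rate, and bin width are illustrative assumptions, not the project's actual pipeline.

```python
# Hypothetical sketch: inter-observer agreement for continuous audience-response
# ratings, assuming two raters record an engagement dial (0-100) at 1 Hz.
import numpy as np

def smooth_to_minutes(ratings_hz, window_s=60):
    """Average a 1 Hz rating trace into non-overlapping 1-minute bins,
    matching the ~1 cycle/minute temporal resolution mentioned above."""
    ratings_hz = np.asarray(ratings_hz, dtype=float)
    n = (len(ratings_hz) // window_s) * window_s
    return ratings_hz[:n].reshape(-1, window_s).mean(axis=1)

def inter_observer_reliability(rater_a, rater_b):
    """Pearson correlation between the two smoothed rating traces."""
    a, b = smooth_to_minutes(rater_a), smooth_to_minutes(rater_b)
    return np.corrcoef(a, b)[0, 1]

if __name__ == "__main__":
    t = np.arange(1800)                                  # a 30-minute session at 1 Hz
    signal = 50 + 30 * np.sin(2 * np.pi * t / 600)       # shared engagement dynamics
    rater_a = signal + np.random.normal(0, 15, t.size)   # each rater adds independent noise
    rater_b = signal + np.random.normal(0, 15, t.size)
    print(f"reliability: {inter_observer_reliability(rater_a, rater_b):.2f}")
```

Binning before correlating reflects the idea that the method is trusted only up to roughly one cycle per minute; finer-grained disagreements between raters are treated as noise.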
This paper introduces the early stages of a study designed to understand the development of dance interactions between QRIO and toddlers in a classroom environment. The study is part of a project to explore the potential use of interactive robots as instructional tools in education. After a three-month observation period, we are starting the experiment. After explaining the experimental environment, we describe the component technologies used in it: an interactive dance with visual feedback, exploiting active contingency detection and robotic emotion expression.

Index Terms: humanoid robot, QRIO, the RUBI/QRIO project, toddlers, long-term interaction, engaging interaction, interactive dance, contingency detection
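The contingency detection mentioned in the abstract above can be illustrated with a minimal sketch: the robot emits probe actions (e.g., dance moves) and tests whether sensed child motion falls inside a short post-probe window more often than chance. This is only one simple way to score contingency; the window length, the rate-ratio statistic, and the example timings below are assumptions, not the system described in the paper.

```python
# Hypothetical sketch of a simple contingency detector for an interactive dance:
# compare the response rate inside post-probe windows to the overall response rate.

def contingency_score(probe_times, response_times, window=2.0, session_len=600.0):
    """Ratio of the response rate inside post-probe windows to the session-wide rate.
    Values well above 1 suggest the child is responding contingently to the robot."""
    in_window = sum(
        any(0.0 <= r - p <= window for p in probe_times) for r in response_times
    )
    window_total = window * len(probe_times)          # approximate time covered by windows
    rate_in = in_window / max(window_total, 1e-9)
    rate_overall = len(response_times) / session_len
    return rate_in / max(rate_overall, 1e-9)

if __name__ == "__main__":
    probes = [10.0, 30.0, 50.0]                       # times (s) the robot made a dance move
    responses = [11.2, 31.0, 50.9, 200.0]             # times (s) child motion was detected
    print(f"contingency score: {contingency_score(probes, responses):.1f}")
```

A score near 1 would mean child motion is unrelated to the robot's moves; a much larger score is evidence of contingent responding, which the robot can use to decide whether a child is engaged in the dance.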
Effective navigation depends upon reliable estimates of head direction (HD). Visual, vestibular, and outflow motor signals combine for this purpose in a brain system that includes the dorsal tegmental nucleus, lateral mammillary nuclei, anterior dorsal thalamic nucleus, and the postsubiculum. Learning is needed to combine such different cues into reliable estimates of HD. A neural model is developed to explain how these three types of signals combine adaptively within the above brain regions to generate a consistent and reliable HD estimate, in both light and darkness, and to explain the following experimental facts. Each HD cell is tuned to a preferred head direction. The cell's firing rate is maximal at the preferred direction and decreases as the head turns away from it. The HD estimate is controlled by the vestibular system when visual cues are not available. A well-established visual cue anchors the cell's preferred direction when the cue is in the animal's field of view. Distal visual cues are more effective than proximal cues for anchoring the preferred direction. Novel cues, introduced in either a novel or a familiar environment, can gain control over a cell's preferred direction within minutes. Turning out the lights or removing all familiar cues does not change the cell's firing activity, but drift may accumulate in the cell's preferred direction. The anticipated time interval (ATI) of the HD estimate is greater in early processing stages of the HD system than at later stages. The model contributes to an emerging unified neural model of how multiple processing stages in spatial navigation, including postsubiculum head direction cells, entorhinal grid cells, and hippocampal place cells, are calibrated through learning in response to multiple types of signals as an animal navigates in the world.
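The qualitative behavior described above (vestibular control in darkness with accumulating drift, and anchoring by a visible landmark) can be illustrated with a minimal sketch that is not the paper's neural model: a scalar HD estimate that integrates noisy angular velocity and, when a familiar cue is in view, is pulled toward the cue-derived heading. The gains, noise level, and simulation parameters are illustrative assumptions.

```python
# Minimal sketch (not the published model) of combining vestibular path
# integration with visual-cue anchoring in a head-direction estimate.
import math
import random

def update_hd(hd, ang_vel, dt, cue_heading=None, cue_gain=0.2, gyro_noise=0.05):
    """One update step: integrate noisy angular velocity, then, if a familiar
    visual cue is in view, pull the estimate toward the cue-derived heading."""
    hd += (ang_vel + random.gauss(0.0, gyro_noise)) * dt          # vestibular integration
    if cue_heading is not None:                                   # cue anchoring (light)
        error = math.atan2(math.sin(cue_heading - hd), math.cos(cue_heading - hd))
        hd += cue_gain * error
    return math.atan2(math.sin(hd), math.cos(hd))                 # wrap to (-pi, pi]

if __name__ == "__main__":
    wrap = lambda a: math.atan2(math.sin(a), math.cos(a))
    true_hd, hd_dark, hd_light = 0.0, 0.0, 0.0
    for _ in range(3000):                                         # constant turn, dt = 0.1 s
        true_hd += 0.1 * 0.1
        hd_dark = update_hd(hd_dark, 0.1, 0.1)                          # darkness: vestibular only
        hd_light = update_hd(hd_light, 0.1, 0.1, cue_heading=true_hd)   # light: cue anchors estimate
    print(f"error in darkness:    {abs(wrap(hd_dark - true_hd)):.3f} rad")
    print(f"error with visual cue: {abs(wrap(hd_light - true_hd)):.3f} rad")
```

Running the sketch shows the estimate drifting when only the noisy vestibular signal is available and staying locked to the true heading when the cue correction is applied, mirroring the light/dark contrast reported in the abstract.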