Recent advances in humanoid robotics have increased the complexity of the tasks that such robots can perform, making it increasingly difficult and inconvenient to program these tasks manually. Furthermore, humanoid robots, in contrast to industrial robots, are ultimately expected to act within a social environment. Therefore, it must be possible to extend the robot's abilities in an easy and natural way. To address these requirements, this work investigates imitation learning of motor skills. The focus lies on providing a humanoid robot with the ability to learn new bimanual tasks through the observation of object trajectories. To this end, an imitation learning framework is presented that allows the robot to learn the important elements of an observed movement task through probabilistic encoding with Gaussian Mixture Models. The learned information is used to initialize an attractor-based movement generation algorithm that optimizes the reproduced movement toward the fulfillment of additional criteria, such as collision avoidance. Experiments performed with the humanoid robot ASIMO show that the proposed system is suitable for transferring information from a human demonstrator to the robot. These results provide a good starting point for more complex and interactive learning tasks. As a concrete example, the robot learns to pour a beverage from a bottle into a glass by observing a teacher demonstrating this task; this choice is arbitrary, and the researched methods are general rather than tied to this specific task.
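To make the encoding step concrete, the sketch below (an illustration under assumed data, not the authors' implementation) fits a Gaussian Mixture Model to demonstrated trajectories over time and position, then retrieves a mean reference trajectory via Gaussian Mixture Regression; such a trajectory could seed an attractor-based reproduction. The synthetic demonstrations and the component count are placeholder assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import norm

# Synthetic stand-in for observed object trajectories: 5 noisy demonstrations.
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size) for _ in range(5)]
data = np.column_stack([np.tile(t, 5), np.concatenate(demos)])  # (N, 2): [time, position]

# Encode the joint (time, position) distribution with a GMM.
gmm = GaussianMixture(n_components=6, covariance_type="full").fit(data)

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: condition the joint GMM on time."""
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the query time.
    h = np.array([w * norm.pdf(t_query, m[0], np.sqrt(c[0, 0]))
                  for w, m, c in zip(priors, means, covs)])
    h /= h.sum()
    # Conditional mean of each component, blended by responsibility.
    cond = [m[1] + c[1, 0] / c[0, 0] * (t_query - m[0]) for m, c in zip(means, covs)]
    return float(np.dot(h, cond))

reference = [gmr(gmm, ti) for ti in t]  # smooth mean trajectory for reproduction
```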
In this paper we present a new robot control and learning system that allows a humanoid robot to extend its movement repertoire by learning from a human tutor. The focus is on learning and imitating motor skills for moving and positioning objects. We concentrate on two major aspects. First, the presented teaching and imitation scenario is fully interactive: a human tutor can teach the robot, which is in turn able to integrate newly learned skills into different movement sequences online. Second, we combine a number of novel concepts to enhance the flexibility and generalization capabilities of the system. Generalization to new tasks is obtained by decoupling the learned movements from the robot's embodiment using a task space representation, which is chosen automatically from a pool of commonly used task spaces. The movement descriptions are further decoupled from specific object instances by formulating them with respect to so-called linked objects, which act as references and can interactively be bound to real objects. When executing a learned task, a flexible kinematic description allows the robot's body schema to be changed online, so that the learned movement can be applied relative to different body parts or new objects. An efficient optimization scheme adapts movements to such situations, performing online obstacle and self-collision avoidance. Finally, all described processes are combined within a comprehensive architecture. To demonstrate the generalization capabilities, we show experiments in which the robot performs a movement bimanually in different environments, although the task was demonstrated by the tutor only one-handed.
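The following minimal sketch illustrates the linked-object idea described above (function names and poses are hypothetical, not taken from the paper's code): a demonstrated hand position is stored relative to a placeholder object frame, then re-anchored to whichever real object that placeholder is bound to at execution time.

```python
import numpy as np

def to_object_frame(p_world, T_world_obj):
    """Express a world-frame point in the object's frame (homogeneous coords)."""
    return np.linalg.inv(T_world_obj) @ np.append(p_world, 1.0)

def to_world_frame(p_obj, T_world_obj):
    """Re-anchor an object-relative point to a (possibly new) object pose."""
    return (T_world_obj @ p_obj)[:3]

# Demonstration: the hand passes 10 cm above the object seen during teaching.
T_teach = np.eye(4); T_teach[:3, 3] = [0.4, 0.0, 0.8]   # taught object pose
p_hand = np.array([0.4, 0.0, 0.9])
p_rel = to_object_frame(p_hand, T_teach)                 # stored, object-relative

# At run time, the linked object is bound to a different instance elsewhere.
T_new = np.eye(4); T_new[:3, 3] = [0.1, 0.3, 0.7]
print(to_world_frame(p_rel, T_new))                      # -> [0.1, 0.3, 0.8]
```

Because only the object-relative representation is stored, the same learned movement transfers to new object instances and, with a changed body schema, to different body parts.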
Previous work [1] shows that representing movements in task spaces offers many advantages for learning object-related and goal-directed movement tasks through imitation. It reduces the dimensionality of the data to be learned and simplifies the correspondence problem that results from the differing kinematic structures of teacher and robot. Further, the task space representation provides a first level of generalization, for example with respect to differing absolute positions when bimanual movements are represented relative to each other. Although task spaces are widely used, even if not mentioned explicitly, they are mostly defined a priori. This work is a step towards an automatic selection of task spaces. Observed movements are mapped into a pool of possibly even conflicting task spaces, and we present methods that analyze this pool in order to acquire the task space descriptors that match the observation best. As statistical measures cannot explain importance for all kinds of movements, the presented selection scheme incorporates additional criteria, such as an attention-based measure. Further, we introduce methods that make a significant step from purely statistically driven task space selection towards model-based movement analysis using a simulation of a complex human model: the effort and discomfort of the human teacher are analyzed and used as a hint for important task elements. All methods are validated with real-world data gathered using color tracking with a stereo vision system and a VICON motion capture system.
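As a sketch of one statistical criterion of the kind mentioned above: trajectories that vary little across demonstrations are consistent and therefore likely task-relevant, so a task space with low inter-demonstration variance can be preferred. The pool entries and data below are illustrative placeholders, not the paper's actual task spaces.

```python
import numpy as np

def inter_demo_variance(trajs):
    """Mean pointwise variance across time-aligned demonstrations.

    trajs: array of shape (n_demos, n_steps, dim) in one task space.
    """
    return np.var(trajs, axis=0).mean()

def select_task_space(pool):
    """pool: dict mapping task-space name -> (n_demos, n_steps, dim) array."""
    scores = {name: inter_demo_variance(trajs) for name, trajs in pool.items()}
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
base = np.linspace(0.0, 1.0, 50)[None, :, None]
pool = {
    # Relative bottle-glass distance: consistent across demos (low variance).
    "bottle_relative_to_glass": base + 0.01 * rng.standard_normal((5, 50, 1)),
    # Absolute bottle position: differs per demo (high variance).
    "bottle_absolute": base + 0.3 * rng.standard_normal((5, 50, 1)),
}
best, scores = select_task_space(pool)
print(best, scores)  # -> "bottle_relative_to_glass" is selected
```

Variance alone cannot capture importance in all cases, which is why the scheme above would be complemented by further criteria such as the attention-based and effort/discomfort measures described in the abstract.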
Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that, by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action, and we thus advocate a paradigm shift in robot action learning research toward truly interactive systems that learn in, and benefit from, interaction.