A major challenge in robotics is the ability to learn, from novel experiences, new behavior that is useful for achieving new goals and skills. Autonomous systems must be able to learn solely through the environment, thus ruling out a priori task knowledge, tuning, extensive training, or other forms of pre-programming. Learning must also be cumulative and incremental, as complex skills are built on top of primitive skills. Additionally, it must be driven by intrinsic motivation, because formative experience is gained through autonomous activity even in the absence of extrinsic goals or tasks. This paper presents an approach to these issues through robotic implementations inspired by the learning behavior of human infants. We describe an approach to developmental learning and present results from a demonstration of longitudinal development on an iCub humanoid robot. The results cover the rapid emergence of staged behavior, the role of constraints in development, the effect of bootstrapping between stages, and the use of a schema memory of experiential fragments in learning new skills. The context is a longitudinal experiment in which the robot advanced from uncontrolled motor babbling to skilled hand/eye integrated reaching and basic manipulation of objects. This approach shows promise for fast and effective sensorimotor learning in robotics.
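To make the ingredients named above concrete, the following is a minimal sketch, under illustrative assumptions, of how staged developmental learning of this kind can be structured: motor babbling within a constrained range, a schema memory of successful experiential fragments, and a constraint that is lifted once recent competence plateaus. The class names, thresholds, and the one-dimensional "arm" are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: staged, intrinsically driven sensorimotor learning.
import random

class SchemaMemory:
    """Stores (context, action, outcome) fragments from successful trials."""
    def __init__(self):
        self.schemas = []

    def add(self, context, action, outcome):
        self.schemas.append((context, action, outcome))

    def recall(self, context, tol=0.1):
        # Return the stored action whose context is closest to the query, if near enough.
        best = min(self.schemas, key=lambda s: abs(s[0] - context), default=None)
        if best is not None and abs(best[0] - context) < tol:
            return best[1]
        return None

def babble(arm_range):
    """Random motor command within the currently released joint range."""
    return random.uniform(*arm_range)

def simulate_reach(command, target):
    """Toy forward model: reach error for a 1-D 'arm'."""
    return abs(command - target)

memory = SchemaMemory()
arm_range = (0.0, 0.5)          # stage 1: developmental constraint on joint range
competence_window = []

for trial in range(200):
    target = random.uniform(0.0, 1.0)
    action = memory.recall(target)
    if action is None:
        action = babble(arm_range)
    error = simulate_reach(action, target)
    if error < 0.05:                         # successful outcome -> store a schema
        memory.add(target, action, error)
    competence_window = (competence_window + [error])[-20:]
    # Lift the constraint (stage transition) once recent performance plateaus.
    window_full = len(competence_window) == 20
    if arm_range[1] < 1.0 and window_full and sum(competence_window) / 20 < 0.3:
        arm_range = (0.0, 1.0)               # stage 2: full range released

print(f"schemas learned: {len(memory.schemas)}")
```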
In aiming for advanced robotic systems that autonomously and continually readapt to changing and uncertain environments, we introduce a scheme for fast learning and readaptation of robotic sensorimotor mappings based on biological mechanisms underpinning the development and maintenance of accurate human reaching. The study presents a range of experiments, using two distinct computational architectures, on both learning and realignment of robotic hand-eye coordination. Analysis of the results provides insights into the putative parameters and mechanisms required for fast readaptation and generalization, from both a robotic and a biological perspective.
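As a hedged illustration of the learning-then-realignment idea described above (not the paper's two architectures), the sketch below keeps a hand-eye mapping in a small table of local units, updates it online from the visually observed reach error, and then readapts after a simulated shift of the camera. The unit count, learning rate, and toy plant are assumptions.

```python
# Illustrative sketch: online hand-eye mapping with rapid realignment.
import numpy as np

class HandEyeMap:
    def __init__(self, n_units=20, lr=0.5):
        self.centres = np.linspace(0.0, 1.0, n_units)   # visual-field positions
        self.commands = np.zeros(n_units)               # associated arm commands
        self.lr = lr

    def predict(self, visual_pos):
        i = int(np.argmin(np.abs(self.centres - visual_pos)))
        return self.commands[i], i

    def update(self, i, observed_error):
        # Shift the stored command to cancel the observed reach error.
        self.commands[i] -= self.lr * observed_error

def world(command, offset=0.0):
    """Toy plant: where the hand ends up in visual coordinates."""
    return command + offset

hand_eye = HandEyeMap()
for phase, offset in [("learning", 0.0), ("readaptation", 0.15)]:
    errors = []
    for trial in range(300):
        target = np.random.rand()
        cmd, i = hand_eye.predict(target)
        seen_hand = world(cmd, offset)
        error = seen_hand - target           # visual error drives the update
        hand_eye.update(i, error)
        errors.append(abs(error))
    print(phase, "final mean error:", round(float(np.mean(errors[-50:])), 3))
```

The point of the sketch is that the same local update rule supports both the initial acquisition of the mapping and its fast realignment once a systematic offset appears.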
Gaze control requires the coordination of movements of both eyes and head to fixate on a target. Using our biologically constrained architecture for gaze control we show how the relationships between the coupled sensorimotor systems can be learnt autonomously from scratch, allowing for adaptation as the system grows or changes. Infant studies suggest developmental learning strategies, which can be applied to sensorimotor learning in humanoid robots. We examine environmental constraints for the learning of eye and head coupled mappings, and give results from implementations on an iCub robot. The results show the impact of these constraints and how they can be overcome to benefit the development of fast, cumulative, on-line learning of coupled sensorimotor systems.
Shaw, P. H., Law, J. A., & Lee, M. H. (2014). A comparison of learning strategies for biologically constrained development of gaze control on an iCub robot. Autonomous Robots, 37(1), 97-110.

Gaze control requires the coordination of movements of both eyes and head to fixate on a target. We present a biologically constrained architecture for gaze control and show how the relationships between the coupled sensorimotor systems can be learnt autonomously from scratch, allowing for adaptation as the system grows or changes. Infant studies suggest developmental learning strategies, which can be applied to sensorimotor learning in humanoid robots. We examine two strategies (sequential and synchronous) for the learning of eye and head coupled mappings, and give results from implementations on an iCub robot. The results show that the developmental approach can give fast, cumulative, online learning of coupled sensorimotor systems.
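A minimal sketch of the distinction between the two strategies named above, under assumed one-dimensional gains and learning rates (not the paper's mapping implementation): "sequential" keeps the head locked while an eye gain is learned and then releases it, whereas "synchronous" adapts both gains from the start, with an assumed eye range limit forcing head involvement for large targets.

```python
# Illustrative sketch: sequential vs synchronous learning of coupled eye/head gains.
import random

EYE_LIMIT = 10.0   # assumed mechanical eye range (degrees); forces head contribution

def learn_gaze(strategy, trials=600, lr=0.1):
    eye_gain, head_gain = 0.0, 0.0                      # learned from scratch
    for t in range(trials):
        # Retinal error of a fixation target, in degrees.
        target = random.uniform(5.0, 30.0) * random.choice([-1.0, 1.0])
        head_locked = (strategy == "sequential" and t < trials // 2)
        head_cmd = 0.0 if head_locked else head_gain * target
        eye_cmd = max(-EYE_LIMIT, min(EYE_LIMIT, eye_gain * (target - head_cmd)))
        residual = target - (head_cmd + eye_cmd)        # post-movement retinal error
        eye_gain += lr * residual / target              # correct gains from the residual
        if not head_locked:
            head_gain += 0.5 * lr * residual / target
    return round(eye_gain, 2), round(head_gain, 2)

for strategy in ("sequential", "synchronous"):
    print(strategy, learn_gaze(strategy))
```

The toy only shows the structural difference between the two schedules; the measured trade-offs between them are those reported in the paper.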
Eye fixation and gaze fixation patterns in general play an important part when humans interact with each other. Moreover, human gaze fixation patterns are strongly determined by the task being performed. Our assumption is that meaningful human-robot interaction with robots that have active vision components (such as humanoids) is greatly supported if the robot system is able to create task-modulated fixation patterns. We present an architecture for a robot active vision system equipped with one manipulator, and we demonstrate the generation of task-modulated gaze control, meaning that fixation patterns accord with the specific task the robot has to perform. Experiments demonstrate different strategies of multi-modal task modulation for robotic active vision, in which visual and non-visual features (tactile feedback) determine gaze fixation patterns. The results are discussed in comparison to purely saliency-based strategies for visual attention and gaze control. The major advantages of our approach to multi-modal task modulation are that the active vision system can generate, first, active avoidance of objects and, second, active engagement with objects. Such behaviors cannot be generated by current approaches to visual attention that are based on saliency models alone, but they are important for mimicking human-like gaze fixation patterns.
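The contrast with saliency-only selection can be illustrated with a small sketch, under assumed object names, weights, and a simple tactile rule (none of which come from the paper): a bottom-up saliency score per object is reweighted by the current task, and a tactile contact event boosts or suppresses an object's priority, yielding active engagement or active avoidance.

```python
# Illustrative sketch: task- and tactile-modulated fixation target selection.

def select_fixation(objects, task_weights, tactile_contact=None, avoid=False):
    """Pick the object with the highest task-modulated priority."""
    scores = {}
    for name, saliency in objects.items():
        score = saliency * task_weights.get(name, 1.0)
        if name == tactile_contact:
            # Tactile feedback modulates priority down (avoidance) or up (engagement).
            score *= 0.1 if avoid else 3.0
        scores[name] = score
    return max(scores, key=scores.get), scores

objects = {"red_cup": 0.9, "blue_block": 0.6, "hand": 0.4}   # bottom-up saliency
grasp_task = {"blue_block": 2.0, "red_cup": 0.5}             # assumed task relevance

print(select_fixation(objects, {}))                  # saliency only -> red_cup
print(select_fixation(objects, grasp_task))          # task-modulated -> blue_block
print(select_fixation(objects, grasp_task,
                      tactile_contact="blue_block", avoid=True))  # active avoidance
```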