During development, infants learn to differentiate their motor behaviors across contexts by exploring and identifying the cause-and-effect structures of the actions they can perform; such action structures are called task sets, or internal models. The ability to detect the structure of a new action, to learn it, and to select the proper one on the fly for the task at hand is a major leap in infant cognition. This behavior is an important component of the child's ability to learn-to-learn, a mechanism akin to intrinsic motivation, which is argued to drive cognitive development. Accordingly, we propose a dual-system model based on (1) the learning of new task sets and (2) their evaluation with respect to their uncertainty and prediction error. The architecture is a two-level neural system: the first level supports context-dependent behavior, and the second handles task exploration and exploitation. In our model, task sets are learned separately by reinforcement learning in the first network after their evaluation and selection in the second. We present two experimental setups demonstrating sensorimotor mapping and switching between tasks: a neural simulation modeling cognitive tasks, and an arm-robot experiment on motor task learning and switching. We show that the interplay of several intrinsic mechanisms drives the rapid formation of neural populations tuned to novel task sets.
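To make the dual-system idea concrete, here is a minimal Python sketch, not the authors' implementation: a pool of task sets is maintained, each learned by tabular Q-learning (the first level), while a selector exploits the set with the lowest running prediction error and spawns a new one when all stored sets predict poorly (the second level). The names, threshold, and tabular representation are all illustrative assumptions.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA = 0.1, 0.9        # learning rate and discount for the RL level
ERROR_THRESHOLD = 0.5          # running error above which a new task set is created

class TaskSet:
    """One internal model: a Q-table plus a running prediction-error estimate."""
    def __init__(self):
        self.q = np.zeros((N_STATES, N_ACTIONS))
        self.avg_error = 0.0   # uncertainty proxy (running mean of |TD error|)

    def update(self, s, a, r, s_next):
        td_error = r + GAMMA * self.q[s_next].max() - self.q[s, a]
        self.q[s, a] += ALPHA * td_error
        self.avg_error = 0.9 * self.avg_error + 0.1 * abs(td_error)

def select_task_set(task_sets):
    """Second level: exploit the stored model with the lowest running error;
    if even the best one predicts poorly, spawn a fresh task set (exploration)."""
    best = min(task_sets, key=lambda ts: ts.avg_error, default=None)
    if best is None or best.avg_error > ERROR_THRESHOLD:
        best = TaskSet()
        task_sets.append(best)
    return best
```

In this toy version, prediction error does double duty: it drives Q-learning within a task set and, through its running average, signals when the current context no longer matches any stored model.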
The so-called self-other correspondence problem in imitation requires finding the transformation that maps a partner's motor dynamics onto our own. This calls for a general-purpose sensorimotor mechanism that transforms an external, fixation-point reference frame (e.g., the partner's shoulder) into one's own body-centered reference frame. We propose that the gain-modulation mechanism observed in parietal neurons may generally serve these types of transformations, on the one hand by binding sensory signals across modalities with radial basis functions (tensor products), and on the other hand by permitting the learning of contextual reference frames. In a shoulder-elbow robotic experiment, gain-field (GF) neurons intertwine the visuo-motor variables so that each neuron's amplitude depends on all of them. When the body-centered reference frame is modified, the error detected in the visuo-motor mapping can then serve to learn the transformation between the robot's current sensorimotor space and the new one. Such situations occur, for instance, when we turn the head on its axis (visual transformation), use a tool (body modification), or interact with a partner (embodied simulation). Our results support the idea that the biologically inspired mechanism of gain modulation found in parietal neurons can serve as a basic structure for achieving nonlinear mappings in spatial tasks as well as in cooperative and social functions.
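As a rough illustration of how gain fields can bind two variables multiplicatively, consider this sketch (our own simplification, not the paper's model): two populations of Gaussian radial-basis units, one over a retinal position and one over an eye position, are combined by an outer product so that each unit's amplitude depends on both signals at once. All variable names and tuning parameters are hypothetical.

```python
import numpy as np

def rbf(x, centers, sigma):
    """Gaussian radial-basis activation of a population over a 1-D variable."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

# Hypothetical setup: eye position gain-modulates retinal-position units.
retinal_centers = np.linspace(-40, 40, 9)   # degrees
eye_centers     = np.linspace(-20, 20, 5)   # degrees

def gain_field_response(retinal_x, eye_x, sigma=10.0):
    """Multiplicative (outer-product / tensor) binding of the two variables:
    each unit's amplitude depends on both the retinal and the eye signal."""
    r = rbf(retinal_x, retinal_centers, sigma)
    g = rbf(eye_x, eye_centers, sigma)
    return np.outer(g, r)        # shape: (eye units, retinal units)

# A downstream linear readout of this population can recover the
# head-centered position retinal_x + eye_x, i.e. the frame transformation.
population = gain_field_response(retinal_x=10.0, eye_x=-5.0)
```

The design point is that the nonlinearity lives in the multiplicative binding, so a purely linear readout suffices to implement different reference-frame transformations from the same population.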
During early infancy, humans experience many physical and cognitive changes that shape their learning and refine their understanding of objects in the world. With the extended arm being one of the very first objects they familiarise themselves with, infants undergo a series of developmental stages that progressively facilitate physical interaction, enrich sensory information, and develop the skills to learn and recognise. Drawing inspiration from infancy, this study models an open-ended learning mechanism for embodied agents that accounts for the cumulative and increasingly complex physical interactions with the world. The proposed system achieves object perception and recognition as the agent (a humanoid robot) matures, experiences changes to its visual capabilities, develops sensorimotor control, and interacts with objects within its reach. The reported findings demonstrate the critical role of developing vision in effective object learning and recognition, and the importance of reaching and grasping in resolving visually elicited ambiguities. Impediments caused by the interdependency of the parallel components responsible for the agent's physical and cognitive functionalities are exposed, revealing an interesting phase transition in the use of object perceptions for recognition.
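One simple way to simulate the maturing visual capabilities described above, purely as an illustrative assumption since the abstract does not specify the actual schedule, is to blur the agent's camera images with a strength that decays with developmental age:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def developing_view(image, age, max_blur=8.0, maturity_age=1.0):
    """Return the image as seen at a developmental 'age' in [0, 1]:
    low acuity (strong Gaussian blur) early on, sharpening with maturity."""
    sigma = max_blur * max(0.0, 1.0 - age / maturity_age)
    return gaussian_filter(image, sigma=sigma)

# Example: the same scene at three developmental stages.
scene = np.random.rand(64, 64)
views = [developing_view(scene, age) for age in (0.1, 0.5, 1.0)]
```

Under such a schedule, early object representations are necessarily coarse, which is one plausible reading of why reaching and grasping become important for resolving visually elicited ambiguities.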
This paper proposes a computational model for learning robot control and sequence planning based on the ideomotor principle. The model encodes covariation laws between sensors and motors in a modular fashion and exploits these primitive skills to build complex action sequences, potentially involving tool use. Implemented on a robotic arm, the model starts from raw, unlabelled sensor and motor vectors and autonomously assigns functions to neutral objects in the environment. Our experimental evaluation highlights the emergent properties of such a modular system, and we discuss their consequences from ideomotor and sensorimotor-theoretic perspectives.
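A minimal sketch of the ideomotor idea follows, under our own simplifying assumption that each primitive skill is a linear covariation law fitted by least squares (the abstract does not specify the modules' form): each module predicts the sensory change a motor command produces, and action selection inverts the mapping by picking the command whose predicted effect best matches the desired one.

```python
import numpy as np

class PrimitiveSkill:
    """One module: a linear covariation law between a motor command m and
    the sensory change it produces, ds ≈ W @ m, fitted by least squares."""
    def __init__(self, motor_dim, sensor_dim):
        self.W = np.zeros((sensor_dim, motor_dim))
        self._M, self._S = [], []

    def observe(self, m, ds):
        """Accumulate (command, sensory change) pairs and refit the law."""
        self._M.append(m)
        self._S.append(ds)
        M, S = np.array(self._M), np.array(self._S)
        self.W = np.linalg.lstsq(M, S, rcond=None)[0].T

    def predict(self, m):
        return self.W @ m

def choose_action(skills, desired_ds, candidate_motors):
    """Ideomotor selection: pick the (skill, command) pair whose predicted
    sensory effect best matches the desired effect."""
    return min(((sk, m) for sk in skills for m in candidate_motors),
               key=lambda p: np.linalg.norm(p[0].predict(p[1]) - desired_ds))
```

Chaining such selections, where the predicted effect of one step becomes the precondition of the next, is one way modular forward models can be composed into the action sequences the paper describes.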