A major challenge in modern robotics is to liberate robots from controlled industrial settings and allow them to interact with humans and changing environments in the real world. The current research asks whether a neurophysiologically motivated model of cortical function in the primate can help address this challenge. Primates are endowed with cognitive systems that allow them to maximize the feedback from their environment by learning the values of actions in diverse situations and by adjusting their behavioral parameters (i.e., cognitive control) to accommodate unexpected events. In such contexts, uncertainty can arise from at least two distinct sources: expected uncertainty, resulting from noise during sensory-motor interaction in a known context, and unexpected uncertainty, resulting from changes in the probabilistic structure of the environment. However, it is not clear how neurophysiological mechanisms of reinforcement learning and cognitive control are integrated in the brain to produce efficient behavior. Based on primate neuroanatomy and neurophysiology, we propose a novel computational model of the interaction between the lateral prefrontal and anterior cingulate cortex that reconciles previous models dedicated to these two functions. We deployed the model in two robots and demonstrated that, through adaptive regulation of a meta-parameter β that controls the exploration rate, the model can robustly deal with both kinds of uncertainty in the real world. In addition, the model reproduced monkey behavioral performance and neurophysiological data in two problem-solving tasks. A final experiment extends this to human-robot interaction with the iCub humanoid, and to novel sources of uncertainty corresponding to "cheating" by the human. The combined results provide concrete evidence for the ability of neurophysiologically inspired cognitive systems to control advanced robots in the real world.
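As a concrete illustration of the exploration-regulation idea described above, the following minimal Python sketch shows softmax action selection whose inverse temperature β is adjusted from a running estimate of feedback quality. The class name, parameter values, and the specific regulation rule are illustrative assumptions, not the published model.

```python
import numpy as np

class AdaptiveExplorer:
    """Minimal sketch: value learning with a feedback-driven exploration rate.

    Hypothetical parameters; the actual model's regulation rule differs.
    """

    def __init__(self, n_actions, alpha=0.1, beta=2.0, beta_gain=0.05):
        self.q = np.zeros(n_actions)   # learned action values
        self.alpha = alpha             # value learning rate
        self.beta = beta               # inverse temperature (exploration rate)
        self.beta_gain = beta_gain     # how fast beta tracks performance
        self.avg_reward = 0.0          # running estimate of feedback

    def act(self, rng=np.random):
        # Softmax action selection: higher beta -> more exploitation.
        logits = self.beta * self.q
        p = np.exp(logits - np.max(logits))
        p /= p.sum()
        return rng.choice(len(self.q), p=p)

    def update(self, action, reward):
        # Standard value update; expected uncertainty (noise in a known
        # context) is absorbed by the running average of outcomes.
        self.q[action] += self.alpha * (reward - self.q[action])
        # A sustained drop in average reward suggests unexpected uncertainty
        # (the task structure changed), so beta decreases to favor exploration.
        self.avg_reward += 0.1 * (reward - self.avg_reward)
        self.beta = np.clip(
            self.beta + self.beta_gain * (self.avg_reward - 0.5), 0.5, 10.0
        )
```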
One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a "shared plan" (which defines the interlaced actions of the two cooperating agents) in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan based on visual observation and kinesthetic demonstration, and finally to coordinate all of these functions in real time. We present a cognitive system that implements these requirements and demonstrate the system's ability to allow a Nao humanoid robot to learn a non-trivial cooperative task in real time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever in the development of cooperative plans from lower-level sensorimotor capabilities.
Index Terms: cooperation, humanoid robot, spoken language interaction, shared plans, situated and social learning.
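To make the notion of a shared plan more concrete, here is a minimal Python sketch of one possible representation: an ordered list of (agent, action) steps that is extended as the plan is negotiated and consulted turn by turn during execution. The class and field names are illustrative assumptions, not the system's actual data structures.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    agent: str    # "robot" or "human"
    action: str   # e.g. "grasp(toy)", "open(box)"

@dataclass
class SharedPlan:
    """Illustrative shared-plan container: interlaced steps of two agents."""
    name: str
    steps: list[PlanStep] = field(default_factory=list)

    def add_step(self, agent: str, action: str) -> None:
        # New component actions (e.g. learned by demonstration) are appended
        # as the plan is negotiated in spoken dialogue.
        self.steps.append(PlanStep(agent, action))

    def next_step_for(self, agent: str, completed: int) -> PlanStep | None:
        # Return the next pending step only if it belongs to this agent,
        # so the robot waits when it is the human's turn.
        if completed < len(self.steps) and self.steps[completed].agent == agent:
            return self.steps[completed]
        return None

# Example: a simple two-agent plan built up step by step.
plan = SharedPlan("put toy in box")
plan.add_step("human", "open(box)")
plan.add_step("robot", "grasp(toy)")
plan.add_step("robot", "place(toy, box)")
plan.add_step("human", "close(box)")
```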
Understanding the world involves extracting the regularities that define the interaction of the behaving organism with this world, and computing the statistical structure characterizing these regularities. This can be based on contingencies of phenomena at various scales, ranging from correlations between sensory signals (e.g., motor-proprioceptive loops) to high-level conceptual links (e.g., vocabulary grounding). Multiple cortical areas contain neurons whose receptive fields are tuned to signals co-occurring in multiple modalities. Moreover, the hierarchical organization of the cortex, described within the Convergence Divergence Zone framework, defines an ideal architecture for extracting and exploiting contingency at increasing levels of complexity. We present an artificial neural network model of these early cortical amodal computations, which we have demonstrated on the humanoid robot iCub. The model explains and predicts findings in neurophysiology and neuropsychology while also serving as an efficient tool for controlling the robot. In particular, through exploratory use of its body, the system learns a form of body schema in terms of specific modalities (e.g., arm proprioception, gaze proprioception, vision) and their multimodal contingencies. Once these multimodal contingencies have been learned, the system can generate and exploit internal representations, or mental images, from input in any one of these modalities. The system thus provides insight into a possible neural substrate for mental imagery in the context of multimodal convergence.
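A minimal sketch of the kind of multimodal contingency learning described above, assuming a simple linear associator between two modalities (arm proprioception and gaze direction): once the mapping is learned from co-occurring samples gathered by exploration, input in one modality can be used to reconstruct, i.e., "imagine", the other. The dimensions, training rule, and variable names are illustrative assumptions, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 arm-joint proprioceptive values, 2 gaze angles.
ARM_DIM, GAZE_DIM = 4, 2

# Unknown body relation the robot discovers by motor babbling
# (a fixed random linear map stands in for the real kinematics).
true_map = rng.normal(size=(GAZE_DIM, ARM_DIM))

# Co-occurring samples collected through exploratory movements.
arm_samples = rng.uniform(-1.0, 1.0, size=(500, ARM_DIM))
gaze_samples = arm_samples @ true_map.T + 0.01 * rng.normal(size=(500, GAZE_DIM))

# Learn the arm -> gaze contingency (least squares stands in for Hebbian-style
# learning of co-occurrence statistics between the two modalities).
W, *_ = np.linalg.lstsq(arm_samples, gaze_samples, rcond=None)

# "Mental imagery": given a new arm posture only, predict where gaze/vision
# would find the hand, without actually moving.
arm_posture = rng.uniform(-1.0, 1.0, size=ARM_DIM)
imagined_gaze = arm_posture @ W
print("imagined gaze target:", imagined_gaze)
```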
Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real time. An important aspect of robot behavior is the ability to acquire new knowledge of cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms, via abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects, to recognize actions as sequences of perceptual primitives, and to transfer this learning and recognition between different robotic platforms. For execution, composite actions and plans are shown to be learned on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.
Index Terms: computer-supported cooperative work, distributed representations, knowledge acquisition, knowledge base management, knowledge management, natural language, natural language interfaces, psychology, robotics, user interface management systems, vision and scene understanding, voice I/O, web-based interaction.
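One way to picture the platform-abstraction idea is a thin interface layer that hides platform-specific motor calls behind a common API, so that plans learned on one robot can be executed on another. The following Python sketch is an illustrative assumption about such a layer; the class names (MotorInterface, NaoMotors, ICubMotors) and methods are hypothetical, not the system's actual code.

```python
from abc import ABC, abstractmethod

class MotorInterface(ABC):
    """Platform-independent motor API used by the plan executor."""

    @abstractmethod
    def reach(self, target: str) -> None: ...

    @abstractmethod
    def grasp(self, target: str) -> None: ...

class NaoMotors(MotorInterface):
    # Hypothetical binding to Nao-specific motion commands.
    def reach(self, target: str) -> None:
        print(f"[Nao] reaching for {target}")

    def grasp(self, target: str) -> None:
        print(f"[Nao] grasping {target}")

class ICubMotors(MotorInterface):
    # Hypothetical binding to iCub-specific motion commands.
    def reach(self, target: str) -> None:
        print(f"[iCub] reaching for {target}")

    def grasp(self, target: str) -> None:
        print(f"[iCub] grasping {target}")

def execute_plan(motors, plan):
    # The same learned plan, a list of (action name, argument) pairs,
    # runs on any platform that implements the abstract motor interface.
    for action, arg in plan:
        getattr(motors, action)(arg)

plan = [("reach", "toy"), ("grasp", "toy")]
execute_plan(NaoMotors(), plan)   # learned on one platform...
execute_plan(ICubMotors(), plan)  # ...executed on another
```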