Abstract-This paper introduces a cognitive architecture for a humanoid robot to engage in proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically-grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot's behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.
Abstract-Learning and perception from multiple sensory modalities are crucial processes for the development of intelligent systems capable of interacting with humans. We present an integrated probabilistic framework for perception, learning and memory in robotics. The core component of our framework is a computational Synthetic Autobiographical Memory model which uses Gaussian Processes as a foundation and mimics the functionalities of human memory. Our memory model, which operates via a principled Bayesian probabilistic framework, is capable of receiving and integrating data flows from multiple sensory modalities, which are combined to improve perception and understanding of the surrounding environment. To validate the model, we implemented our framework in the iCub humanoid robot, which was able to learn and recognise human faces, arm movements and touch gestures through interaction with people. Results demonstrate the flexibility of our method to successfully integrate multiple sensory inputs for accurate learning and recognition. Thus, our integrated probabilistic framework offers a promising core technology for robust intelligent systems, which are able to perceive, learn and interact with people and their environments.
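The abstract above describes fusing multiple sensory modalities within a Gaussian-process framework for recognition. As a minimal illustrative sketch (not the authors' implementation), one can concatenate feature vectors from two synthetic "modalities" and train a GP classifier on the fused representation; the data, dimensionalities and class statistics below are invented for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Two synthetic "modalities" (stand-ins for, e.g., face and touch features),
# 4-D and 3-D respectively, for two classes of interaction partner.
n = 40
faces = np.vstack([rng.normal(0, 1, (n, 4)), rng.normal(3, 1, (n, 4))])
touch = np.vstack([rng.normal(0, 1, (n, 3)), rng.normal(3, 1, (n, 3))])
labels = np.array([0] * n + [1] * n)

# Fuse modalities by simple concatenation, then fit a GP classifier.
X = np.hstack([faces, touch])
gp = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X, labels)

# A held-out sample drawn from the class-1 statistics of both modalities.
query = np.hstack([rng.normal(3, 1, 4), rng.normal(3, 1, 3)])[None, :]
pred = gp.predict(query)[0]
proba = gp.predict_proba(query)
```

Concatenation is only the simplest fusion scheme; the probabilistic framing also allows each modality's uncertainty to weight its contribution, which is closer in spirit to the Bayesian integration the abstract describes.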
From neuroscience, brain imaging and the psychology of memory, we are beginning to assemble an integrated theory of the brain subsystems and pathways that allow the compression, storage and reconstruction of memories for past events and their use in contextualizing the present and reasoning about the future—mental time travel (MTT). Using computational models, embedded in humanoid robots, we are seeking to test the sufficiency of this theoretical account and to evaluate the usefulness of brain-inspired memory systems for social robots. In this contribution, we describe the use of machine learning techniques—Gaussian process latent variable models—to build a multimodal memory system for the iCub humanoid robot and summarize results of the deployment of this system for human–robot interaction. We also outline the further steps required to create a more complete robotic implementation of human-like autobiographical memory and MTT. We propose that generative memory models, such as those that form the core of our robot memory system, can provide a solution to the symbol grounding problem in embodied artificial intelligence. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
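The generative, reconstructive character of the memory system described above can be illustrated with a deliberately simplified sketch (standard GP regression rather than the GP-LVM the paper uses): a GP trained on time-stamped observations "recalls" a past moment by posterior reconstruction and "imagines" a future one by extrapolation, with uncertainty attached to both. The signal and time points here are invented for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# "Events" observed at known times; the GP acts as a compressed, generative
# memory that reconstructs observations at arbitrary query times.
t_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])[:, None]
y_obs = np.sin(t_obs).ravel()

gp = GaussianProcessRegressor(kernel=1.0 * RBF(1.0), alpha=1e-6).fit(t_obs, y_obs)

# "Remembering" a moment between stored events (t = 1.5) and "pre-experiencing"
# a future one (t = 5.0); the posterior std quantifies confidence in each.
t_query = np.array([[1.5], [5.0]])
mean, std = gp.predict(t_query, return_std=True)
```

The reconstruction at t = 1.5 lies close to the true underlying signal, while the extrapolated "future" at t = 5.0 carries visibly larger uncertainty, a toy analogue of the asymmetry between recalling the past and imagining the future in mental time travel.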
Synthetic psychology describes the approach of "understanding through building" applied to the human condition. In this chapter, we consider the specific challenge of synthesizing a robot "sense of self". Our starting hypothesis is that the human self is brought into being by the activity of a set of transient self-processes instantiated by the brain and body. We propose that we can synthesize a robot self by developing equivalent subsystems within an integrated biomimetic cognitive architecture for a humanoid robot. We begin the chapter by motivating this work in the context of the criteria for recognizing other minds, and the challenge of benchmarking artificial intelligence against the human, and conclude by describing efforts to create a sense of self for the iCub humanoid robot that has ecological, temporally-extended, interpersonal and narrative components set within a multi-layered model of mind. Alan Turing, one of the founders of computer science, once suggested that there were two paths to human-level Artificial Intelligence (AI)—one through emulating the more abstract abilities of the human mind, such as chess playing, the other, much closer to the spirit of this book, by providing a robot with "the best sense organs that money can buy, and then teach[ing] it to understand and speak English. This process could follow the normal teaching of a child" [66, p. 460]. Turing was noncommittal about which approach would work best and suggested we try both. Two-thirds of a century after Turing, as different AIs battle between themselves to be the world's best at chess [59], it is clear that the first approach has been spectacularly successful at producing some forms of machine intelligence, though not at emulating or approaching "general intelligence"—the
Generating complex, human-like behavior in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, and touch detection), object manipulation (basic and complex motor actions), and social interaction (speech synthesis and joint attention), exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human-robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behavior and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarizing themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.