In this paper we report the results of our research on learning and developing cognitive systems. The results are integrated into ALIS 3, our Autonomous Learning and Interacting System version 3, realized on the humanoid robot ASIMO. The results presented address crucial issues in autonomously acquiring mental concepts in artifacts. The major contributions are the following: We researched distributed learning in various modalities in which the local learning decisions mutually support each other. Associations between the different modalities (speech, vision, behavior) are learnt online, thus addressing the issue of grounding semantics. The data from the different modalities is uniformly represented in a hybrid data representation for global decisions and local novelty detection. On the behavior generation side, proximity-sensor-driven reflexive grasping and releasing has been integrated with a planning approach based on whole-body motion control. The feasibility of the chosen approach is demonstrated in interactive experiments with the integrated system. The system interactively learns visually defined classes such as "left", "right", "up", "down", "large", and "small", learns corresponding auditory labels, and creates associations linking the auditory labels to the visually defined classes or basic behaviors for building internal concepts.
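The abstract's central mechanism, linking online-learned auditory labels to visually defined classes, can be illustrated by a minimal co-occurrence sketch. This is not the ALIS 3 implementation; the class name AssociationMemory, its methods, and the learning rule below are illustrative assumptions about how such a cross-modal association could be accumulated online.

```python
# Minimal sketch of online cross-modal association learning, assuming discrete
# visual class indices and auditory label indices. Names and update rule are
# illustrative assumptions, not the ALIS 3 API.
import numpy as np

class AssociationMemory:
    def __init__(self, n_visual_classes, n_auditory_labels, learning_rate=0.1):
        # Co-occurrence weights linking auditory labels to visual classes.
        self.weights = np.zeros((n_auditory_labels, n_visual_classes))
        self.learning_rate = learning_rate

    def observe(self, auditory_label, visual_class):
        # Hebbian-style online update: strengthen the link between the label
        # and the visual class that were active at the same time.
        self.weights[auditory_label, visual_class] += self.learning_rate

    def best_label(self, visual_class):
        # Retrieve the auditory label most strongly associated with a class.
        return int(np.argmax(self.weights[:, visual_class]))

# Example: the tutor says "left" (label 0) while the attended object falls
# into the visually defined class "left" (class 0).
memory = AssociationMemory(n_visual_classes=6, n_auditory_labels=6)
memory.observe(auditory_label=0, visual_class=0)
print(memory.best_label(visual_class=0))  # -> 0
```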
We describe a system for visual interaction developed for humanoid robots. It enables the robot to interact with its environment using smooth whole-body motion control driven by stabilized visual targets. Targets are defined as visually extracted "proto-objects" and behavior-relevant object hypotheses and are stabilized by means of a short-term sensory memory. Selection mechanisms are used to switch between behavior alternatives for searching or tracking objects, as well as between different whole-body motion strategies for reaching. The decision between motion strategies, such as reaching with the right or left hand, or with or without walking, is made based on internal predictions that use copies of the whole-body control algorithm. The results show robust object tracking and smooth interaction behavior that includes a large variety of whole-body postures.
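The selection between motion strategies via internal prediction can be sketched as follows: each candidate strategy is simulated on a copy of the controller, scored, and the cheapest feasible one is chosen. DummyController, the prediction fields, and the cost function are illustrative assumptions, not the actual ASIMO whole-body control interface.

```python
# Minimal sketch of strategy selection by internal prediction, assuming each
# candidate strategy can be simulated on a copy of the controller and scored.
from copy import deepcopy
from dataclasses import dataclass

@dataclass
class Prediction:
    reachable: bool
    effort: float              # predicted control effort of the strategy
    joint_limit_margin: float  # posture comfort: distance from joint limits

class DummyController:
    """Stands in for a copy of the whole-body control algorithm (assumption)."""
    def simulate_reach(self, strategy, target):
        walking = "walking" in strategy
        return Prediction(reachable=True,
                          effort=2.0 if walking else 1.0,
                          joint_limit_margin=0.5 if walking else 0.2)

STRATEGIES = ["reach_left_hand", "reach_right_hand",
              "reach_left_hand_walking", "reach_right_hand_walking"]

def select_strategy(controller, target):
    best, best_cost = None, float("inf")
    for strategy in STRATEGIES:
        # Predict on a copy so the real controller state stays untouched.
        pred = deepcopy(controller).simulate_reach(strategy, target)
        if not pred.reachable:
            continue
        # Prefer low effort and comfortable postures.
        cost = pred.effort - pred.joint_limit_margin
        if cost < best_cost:
            best, best_cost = strategy, cost
    return best

print(select_strategy(DummyController(), target=(0.5, 0.0, 0.8)))
```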
We introduce our latest autonomous learning and interaction system instance, ALIS 2. It comprises different sensing modalities for visual (depth blobs, planar surfaces, motion) and auditory (speech, localization) signals and self-collision-free behavior generation on the robot ASIMO. The system design emphasizes the split into a completely autonomous reactive layer and an expectation generation layer. Different feature channels can be classified and named with arbitrary speech labels in online learning sessions. The feasibility of the proposed approach is shown by interaction experiments.
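The emphasized split between a fully autonomous reactive layer and an expectation generation layer can be pictured with a small sketch: the reactive layer always produces a behavior from current percepts alone, and the expectation layer may bias but never block it. All class and field names below are illustrative assumptions, not the ALIS 2 software.

```python
# Minimal sketch of a reactive layer plus an expectation generation layer,
# under the assumption that expectations only bias the reactive behavior.

class ReactiveLayer:
    def act(self, percept):
        # Purely stimulus-driven: keeps the robot responsive even when the
        # higher layer provides no input.
        if percept.get("salient_blob") is not None:
            return {"behavior": "track", "target": percept["salient_blob"]}
        return {"behavior": "search", "target": None}

class ExpectationLayer:
    def bias(self, percept, command):
        # Top-down expectations (e.g. where a named object should appear)
        # redirect attention but do not override reactive behaviors.
        if command["behavior"] == "search" and "expected_region" in percept:
            command["target"] = percept["expected_region"]
        return command

reactive, expectation = ReactiveLayer(), ExpectationLayer()

def control_step(percept):
    return expectation.bias(percept, reactive.act(percept))

print(control_step({"salient_blob": (0.3, 0.1)}))      # reactive tracking
print(control_step({"expected_region": "table_left"})) # biased search
```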
A stable perception of the environment is a crucial prerequisite for researching the learning of semantics from human-robot interaction, and also for the generation of behavior relying on the robot's perception. In this paper, we propose several contributions to this research field. To organize visual perception, the concept of proto-objects is used for the representation of scene elements. These proto-objects are created by several different sources and can be combined to provide the means for interactive autonomous behavior generation. They are also processed by several classifiers, each extracting different visual properties. The robot learns to associate speech labels with these properties by using the outcome of the classifiers for online training of a speech recognition system. To ease the combination of visual and speech classifier outputs, which is necessary for online training and a basis for the future learning of semantics, a common representation for all classifier results is used. This uniform handling of multimodal information provides the necessary flexibility for further extension. We show the feasibility of the proposed approach in interactive experiments with the humanoid robot ASIMO.
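The common representation for all classifier results can be sketched as a proto-object that collects named confidence vectors from each classifier, visual or speech, in the same format. The ProtoObject and ClassifierResult structures and their field names are illustrative assumptions, not the representation used in the paper.

```python
# Minimal sketch of a uniform classifier-result representation attached to a
# proto-object, assuming every classifier reports a confidence per class.
from dataclasses import dataclass, field

@dataclass
class ClassifierResult:
    channel: str        # e.g. "size", "position", "speech_label"
    confidences: dict   # class name -> confidence in [0, 1]

    def winner(self):
        return max(self.confidences, key=self.confidences.get)

@dataclass
class ProtoObject:
    position_3d: tuple  # stabilized location of the scene element
    results: list = field(default_factory=list)

    def add(self, result):
        self.results.append(result)

# A "large" object on the left, with the recognized speech label "large":
po = ProtoObject(position_3d=(0.4, 0.2, 0.9))
po.add(ClassifierResult("size", {"large": 0.8, "small": 0.2}))
po.add(ClassifierResult("speech_label", {"large": 0.7, "left": 0.3}))
for r in po.results:
    print(r.channel, r.winner())
# The uniform format makes it straightforward to pair visual and speech
# outcomes, e.g. for online training of the speech recognizer.
```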