For a mobile robot to interact with a dynamic environment, object motion must be estimated while the robot is walking and/or turning its head. In this paper, we describe a system that accomplishes this task by combining depth from a stereo camera with the camera movement computed from robot kinematics in order to stabilize the camera images. Moving objects are detected by applying optical flow to the stabilized images, followed by a filtering method that incorporates both prior knowledge about the accuracy of the measurement and the uncertainties of the measurement process itself. The effectiveness of this system is demonstrated in a dynamic real-world scenario with a walking humanoid robot.
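As a rough illustration of this pipeline, the sketch below compensates for a known camera rotation before computing optical flow. The pure-rotation homography model, the Farneback flow, and the simple threshold-plus-morphology step are illustrative stand-ins for the paper's kinematics-based stabilization and uncertainty-aware filtering, whose details the abstract does not give.

```python
# Minimal sketch (not the authors' code): detect moving objects after
# ego-motion compensation. Assumes the head rotation R and camera
# intrinsics K are known from robot kinematics; a pure-rotation
# homography warps the previous frame onto the current one, so residual
# optical flow is attributed to independently moving objects.
import cv2
import numpy as np

def detect_motion(prev_gray, curr_gray, K, R_prev_to_curr, flow_thresh=1.5):
    # Homography induced by the known head rotation (pure-rotation model).
    H = K @ R_prev_to_curr @ np.linalg.inv(K)
    h, w = prev_gray.shape
    stabilized_prev = cv2.warpPerspective(prev_gray, H, (w, h))

    # Dense optical flow on the stabilized pair: remaining flow should
    # stem from object motion, not camera motion.
    flow = cv2.calcOpticalFlowFarneback(stabilized_prev, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    # Crude stand-in for the paper's uncertainty-aware filtering:
    # threshold the residual flow and clean the mask with morphology.
    mask = (magnitude > flow_thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```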
In this paper, we report the results of our research on learning and developing cognitive systems. The results are integrated into ALIS 3, our Autonomous Learning and Interacting System version 3, realized on the humanoid robot ASIMO. The results presented address crucial issues in autonomously acquiring mental concepts in artifacts. The major contributions are the following: We researched distributed learning across various modalities in which the local learning decisions mutually support each other. Associations between the different modalities (speech, vision, behavior) are learnt online, thus addressing the issue of grounding semantics. The data from the different modalities are uniformly represented in a hybrid data representation for global decisions and local novelty detection. On the behavior generation side, proximity-sensor-driven reflexive grasping and releasing have been integrated with a planning approach based on whole-body motion control. The feasibility of the chosen approach is demonstrated in interactive experiments with the integrated system. The system interactively learns visually defined classes like "left", "right", "up", "down", "large", "small", learns corresponding auditory labels, and creates associations linking the auditory labels to the visually defined classes or to basic behaviors for building internal concepts.
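A minimal sketch of how such online label-to-class associations might be maintained is shown below. The CrossModalAssociator class, its Hebbian-style update rule, and the example labels are illustrative assumptions, not the actual ALIS 3 mechanism described in the paper.

```python
# Hedged sketch of online cross-modal association: co-occurring
# (auditory label, visual class) pairs strengthen an association weight,
# grounding speech labels in visually defined concepts.
from collections import defaultdict

class CrossModalAssociator:
    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        # weights[(label, visual_class)] -> association strength in [0, 1]
        self.weights = defaultdict(float)

    def observe(self, auditory_label, visual_class):
        # Hebbian-style online update: each co-occurrence strengthens
        # the link, saturating toward 1.
        key = (auditory_label, visual_class)
        self.weights[key] += self.lr * (1.0 - self.weights[key])

    def lookup(self, auditory_label):
        # Return the visual class most strongly associated with the label.
        candidates = {vc: w for (lbl, vc), w in self.weights.items()
                      if lbl == auditory_label}
        return max(candidates, key=candidates.get) if candidates else None

# Example (hypothetical class names): after repeated co-occurrences,
# the spoken label "left" grounds to a visually defined class.
assoc = CrossModalAssociator()
for _ in range(5):
    assoc.observe("left", "region_left")
print(assoc.lookup("left"))  # -> "region_left"
```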
We introduce our latest autonomous learning and interaction system instance, ALIS 2. It comprises different sensing modalities for visual (depth blobs, planar surfaces, motion) and auditory (speech, localization) signals and self-collision-free behavior generation on the robot ASIMO. The system design emphasizes the split into a completely autonomous reactive layer and an expectation generation layer. Different feature channels can be classified and named with arbitrary speech labels in online learning sessions. The feasibility of the proposed approach is demonstrated in interaction experiments.
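The sketch below illustrates one plausible form of such online labeling of feature channels: an incremental nearest-prototype classifier with a novelty threshold. The class name, threshold value, and distance metric are assumptions for illustration; the abstract does not specify the classifier ALIS 2 actually uses.

```python
# Illustrative sketch only: an incremental nearest-prototype classifier
# of the kind that could name feature channels (depth blobs, planar
# surfaces, motion) with arbitrary speech labels during an online
# interaction session, with simple local novelty detection.
import numpy as np

class OnlineLabeler:
    def __init__(self, novelty_thresh=0.5):
        self.prototypes = []   # list of (feature_vector, speech_label)
        self.thresh = novelty_thresh

    def teach(self, features, speech_label):
        # Store a labeled prototype supplied during a teaching interaction.
        self.prototypes.append((np.asarray(features, dtype=float),
                                speech_label))

    def classify(self, features):
        # Nearest prototype wins; distances above the threshold are
        # reported as novel (local novelty detection).
        if not self.prototypes:
            return "novel"
        x = np.asarray(features, dtype=float)
        dists = [np.linalg.norm(x - p) for p, _ in self.prototypes]
        i = int(np.argmin(dists))
        return self.prototypes[i][1] if dists[i] < self.thresh else "novel"
```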