One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a "shared plan" - which defines the interlaced actions of the two cooperating agents - in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan based on visual observation and kinesthetic demonstration, and to coordinate all of these functions in real time. We present a cognitive system that implements these requirements and demonstrate its ability to allow a Nao humanoid robot to learn a non-trivial cooperative task in real time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever for building cooperative plans from lower-level sensorimotor capabilities.
Index Terms - cooperation, humanoid robot, spoken language interaction, shared plans, situated and social learning.
Embodied hyperacuity from Bayesian perception: Shape and position discrimination with an iCub fingertip sensor
Nathan F. Lepora, Uriel Martinez-Hernandez, Hector Barron-Gonzalez, Mat Evans, Giorgio Metta, Tony J. Prescott
Abstract - Recent advances in modeling animal perception have motivated an approach of Bayesian perception applied to biomimetic robots. This study presents an initial application of Bayesian perception to an iCub fingertip sensor mounted on a dedicated positioning robot. We systematically probed the test system with five cylindrical stimuli offset by a range of positions relative to the fingertip. Testing the real-time speed and accuracy of shape and position discrimination, we achieved sub-millimeter accuracy with just a few taps. This result is apparently the first explicit demonstration of perceptual hyperacuity in robot touch, in that object positions are perceived more accurately than the taxel spacing. We also found substantial performance gains when the fingertip can reposition itself to avoid poor perceptual locations, which indicates that improved robot perception could mimic active perception in animals.
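The tap-by-tap evidence accumulation described above can be sketched as a recursive Bayesian update over candidate (shape, position) classes: each tap contributes a log-likelihood per class, and the decision is made once one class's posterior exceeds a belief threshold. This is an illustrative sketch under assumed inputs, not the authors' implementation; the likelihood table and the 0.99 threshold are hypothetical placeholders for values learned in training.

```python
import numpy as np

def bayes_update(log_prior, log_likelihoods):
    """One tap: combine prior with the tap's log-likelihoods, renormalize."""
    log_post = log_prior + log_likelihoods
    log_post -= np.log(np.sum(np.exp(log_post)))  # normalize posterior
    return log_post

def perceive(log_lik_per_tap, threshold=0.99):
    """Accumulate evidence over successive taps until one class is believed.

    log_lik_per_tap: array of shape (n_taps, n_classes), one row per tap.
    Returns the winning class index and the number of taps used.
    """
    n_classes = log_lik_per_tap.shape[1]
    log_post = np.full(n_classes, -np.log(n_classes))  # uniform prior
    for t, log_lik in enumerate(log_lik_per_tap):
        log_post = bayes_update(log_post, log_lik)
        if np.exp(log_post.max()) > threshold:
            break  # belief threshold crossed: stop tapping
    return int(np.argmax(log_post)), t + 1
```

Because evidence accumulates multiplicatively across taps, even weakly informative individual taps can drive the posterior to a confident decision after only a handful of contacts, which is the mechanism behind the "just a few taps" result.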
This work aims to augment the capacities for haptic perception in the iCub robot by generating a controller for surface exploration. The main task involves moving the hand over an irregular surface with uncertain slope while concurrently regulating contact pressure. Providing this ability will enable the autonomous extraction of important haptic features, such as texture and shape. We propose a hand controller whose operational space is defined over the surface of contact. The surface is estimated using a robust probabilistic estimator, which is then used for path planning. The motor commands are generated using a feedback controller, taking advantage of the kinematic information available through proprioception. Finally, the effectiveness of this controller is extended using a cerebellar-like adapter that generates reliable pressure tracking over the finger and results in a trajectory that is less vulnerable to perturbations. The results of this work are consistent with insights about the role of the cerebellum in haptic perception in humans.
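The control scheme described above pairs a feedback law with an adaptive term that compensates for persistent disturbances such as an uncertain slope. A minimal sketch of that idea, assuming a simple PI feedback loop plus a slowly learned feedforward bias (loosely analogous to the cerebellar-like adapter; all gains and signals here are hypothetical, not the authors' implementation):

```python
class PressureController:
    """Pressure tracking: PI feedback plus an adaptive feedforward term."""

    def __init__(self, kp=0.8, ki=0.1, lr=0.05):
        self.kp, self.ki, self.lr = kp, ki, lr
        self.integral = 0.0
        self.feedforward = 0.0  # adaptive term, tuned from the error

    def step(self, desired, measured, dt=0.01):
        error = desired - measured
        self.integral += error * dt
        # "Cerebellar-like" adaptation: slowly learn a feedforward bias
        # that cancels persistent error (e.g. from an uncertain slope).
        self.feedforward += self.lr * error
        return self.kp * error + self.ki * self.integral + self.feedforward
```

The design point is that once the feedforward term has adapted, the feedback loop only has to reject transient perturbations, which is why the adapted trajectory is less vulnerable to disturbances.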
Recent advances in visual SLAM have focused on improving the estimation of sparse 3D points or patches that represent parts of the surroundings. In order to establish an adequate scene understanding, inference of spatial relations among landmarks must be part of the SLAM processing. A novel Rao-Blackwellized PF-SLAM algorithm is proposed that exploits the geometric relations of landmarks with respect to high-level features, such as planes, to improve estimation. These geometric relations are defined as a set of geometric constraint hypotheses inferred during the mapping task. In each prediction-update cycle of the estimation, probabilistic constraints are created and applied to update the landmarks through a hierarchical inference process. Experiments show that the proposed method improves both the estimation accuracy and the completeness of the scene description.
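The core idea above - landmarks hypothesized to belong to a high-level feature such as a plane are updated using that constraint - can be illustrated in miniature: fit a plane to a set of landmark estimates, then softly pull each landmark toward it. This is a hypothetical sketch of one constraint type, not the paper's full hierarchical inference; the `weight` parameter standing in for the constraint's probabilistic confidence is an assumption.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (unit normal, offset)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    return normal, normal @ centroid   # plane: normal . x = offset

def apply_plane_constraint(points, normal, offset, weight=0.5):
    """Move each landmark a fraction `weight` of the way onto the plane."""
    dist = points @ normal - offset    # signed distance of each point
    return points - weight * dist[:, None] * normal
```

With `weight=1.0` the landmarks are projected exactly onto the hypothesized plane; a fractional weight corresponds to a soft constraint that blends the geometric hypothesis with the original per-landmark estimate.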