Abstract—A robot designer can provide a robot with the knowledge needed to perform tasks in an environment. However, this approach can limit the tasks the robot will be able to achieve in the future. Providing the robot with the ability to develop its own skills paves the way for robots that are not limited by design. In this work, a task consists of reproducing a given set of effects on an object. The robot must accomplish this task with limited information about the object, learning affordances to reproduce the effects and increasing this information through consecutive interactions with the object. We propose a method named Adaptive Affordance Learning (A²L) that endows a robot with the capacity to learn affordances associated with an object, both adapting the robot's actions to the object's position and increasing the robot's information about the object when needed. This paper presents two main contributions: first, an online adaptation of the robot's actions to interact with the object, decomposing each action into a sequence of movements and adapting each movement, in a closed loop, to the object's position; and second, to increase the information about the object, an iterative process that alternates between (1) exploration of the environment through interaction with the object, (2) affordance acquisition, and (3) affordance validation. These contributions are assessed in two experiments in which a simulated Baxter robot learns to push a box to different positions on a table.
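The explore–acquire–validate loop described in the abstract can be sketched as follows. Everything here is illustrative: the `ToyEnv` interface (`observe`, `apply`, a discrete action set) and the effect-to-action table are assumptions of this sketch, not the paper's actual representation of affordances or of the Baxter experiments.

```python
import random

class ToyEnv:
    """Hypothetical stand-in for the environment: a box on a table
    that can be pushed left or right."""
    actions = ("push_left", "push_right")

    def observe(self):
        return "box_at_center"

    def apply(self, action):
        return "box_left" if action == "push_left" else "box_right"

def explore(env, n_interactions=20):
    """(1) Interact with the object, recording (state, action, effect) samples."""
    samples = []
    for _ in range(n_interactions):
        state = env.observe()
        action = random.choice(env.actions)
        samples.append((state, action, env.apply(action)))
    return samples

def acquire(samples):
    """(2) Learn a minimal affordance model: effect -> actions seen to cause it."""
    model = {}
    for _state, action, effect in samples:
        model.setdefault(effect, set()).add(action)
    return model

def validate(env, model, goal_effects):
    """(3) Return the desired effects that the learned model fails to reproduce."""
    return [e for e in goal_effects
            if e not in model or env.apply(next(iter(model[e]))) != e]

def adaptive_affordance_learning(env, goal_effects, max_iters=5):
    """Alternate exploration, acquisition and validation until the goal
    effects can all be reproduced (or the iteration budget runs out)."""
    model = {}
    for _ in range(max_iters):
        model = acquire(explore(env))
        if not validate(env, model, goal_effects):
            break
    return model

model = adaptive_affordance_learning(ToyEnv(), ["box_left", "box_right"])
```

The point of the loop structure is that validation failures trigger further exploration, so the robot's information about the object grows only when it is actually needed to reproduce an effect.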
Humans' everyday environments are open environments, in which objects with new shapes, colors, or textures frequently appear. Enabling robots to deal with such environments and to manipulate those objects raises a difficult challenge: how can a robot recognize an object, and how can it distinguish that object from the background? An approach is proposed here to allow the robot to find this segmentation on its own. It relies on an active exploration of the environment aimed at identifying the features of things that move after contact with the robot's end-effector. The only assumption made is that the objects of interest are solid objects that the robot can move. The proposed approach can thus be applied without modification to a wide range of environments, as shown by the experiments performed by the robot.
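A minimal sketch of the underlying idea, under the abstract's stated assumption that objects of interest move when pushed: compare the scene before and after a contact, and label whatever changed as belonging to a movable object. The 1-D arrays and threshold below are illustrative stand-ins for real camera images and feature extraction.

```python
def object_mask_from_motion(before, after, threshold=0.1):
    """Mark as 'object' every pixel whose value changed after the poke;
    note that both the vacated and the newly occupied pixels get flagged,
    while the static background does not."""
    return [abs(a - b) > threshold for b, a in zip(before, after)]

# hypothetical 1-D "scene": a bright object (value 5) on a dark background
before = [0, 0, 5, 5, 0, 0]
after  = [0, 0, 0, 5, 5, 0]   # the object shifted right after contact
mask = object_mask_from_motion(before, after)
```

In practice the same motion cue would be aggregated over several pokes and used to learn which visual features belong to movable things, rather than segmenting a single image pair.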
Our daily environments are complex, composed of objects with different features. These features can be categorized into low-level features, e.g., an object's position or temperature, and high-level features resulting from a pre-processing of low-level features for decision purposes, e.g., a binary value indicating whether an object is too hot to be grasped. Moreover, our environments are dynamic, i.e., object states can change at any moment. Therefore, robots performing tasks in these environments must have the capacity to (i) identify the next action to execute based on the available low-level and high-level object states, and (ii) dynamically adapt their actions to state changes. We introduce a method named Interaction State-based Skill Learning (IS²L), which builds skills to solve tasks in realistic environments. A skill is a Bayesian network that infers actions composed of a sequence of movements of the robot's end-effector, which adapt locally to spatio-temporal perturbations using a dynamical system. In this work, an external agent performs one or more kinesthetic demonstrations of an action, generating a dataset of high-level and low-level states of the robot and the environment objects. First, the method transforms each interaction to represent (i) the relationship between the robot and the object and (ii) the next robot end-effector movement to perform at consecutive instants of time. Then, the skill is built, i.e., the Bayesian network is learned. While generating an action, this skill relies on the robot and object states to infer the next movement to execute. This movement selection is inspired by a type of predictive model for action selection usually called affordances. The main contribution of this paper is combining the main features of dynamical systems and affordances in a single method to build skills that solve tasks in realistic scenarios.
More precisely, it combines the low-level movement generation of dynamical systems, which adapts to local perturbations, with next-movement selection based simultaneously on high-level and low-level states. This contribution was assessed in three experiments in realistic environments using both high-level and low-level states. The built skills solved the respective tasks, relying on both types of states and adapting to external perturbations.
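The combination described above can be sketched in miniature: a lookup table stands in for the learned Bayesian network (state in, next movement goal out), and a linear attractor stands in for the dynamical system that tracks the goal while absorbing perturbations. The state names, goal coordinates, and gain are all invented for illustration and are not from the paper.

```python
# Stand-in for the learned Bayesian network: a conditional table mapping the
# current (high-level, low-level) interaction state to the next movement goal.
MOVEMENT_POLICY = {
    ("graspable", "far"):  (0.5, 0.0),   # approach the object
    ("graspable", "near"): (0.0, 0.2),   # close in for contact
    ("too_hot",   "far"):  (0.0, 0.0),   # hold position
}

def next_goal(high_state, low_state):
    """Infer the next end-effector movement goal from the interaction state."""
    return MOVEMENT_POLICY[(high_state, low_state)]

def dynamical_step(position, goal, gain=0.5):
    """One step of a linear attractor: move a fraction of the remaining
    distance toward the goal, so a displaced goal (an external spatio-temporal
    perturbation) is absorbed at the very next step."""
    return tuple(p + gain * (g - p) for p, g in zip(position, goal))

# generate an action: infer a goal from the states, then converge on it
pos = (0.0, 0.0)
goal = next_goal("graspable", "far")
for _ in range(10):
    pos = dynamical_step(pos, goal)
```

The design point this illustrates is the separation of concerns in the abstract: the (here, tabular) model decides *which* movement comes next from both kinds of state, while the dynamical system decides *how* to execute it robustly.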