The concept of affordances was introduced by J. J. Gibson to explain how the inherent "values" and "meanings" of things in the environment can be directly perceived, and how this information can be linked to the action possibilities that the environment offers to the organism. Although it originated in psychology, the concept has influenced studies in fields ranging from human-computer interaction to autonomous robotics. In this article, we first introduce the concept of affordances as conceived by J. J. Gibson and review the use of the term in different fields, with particular emphasis on autonomous robotics. We then summarize four of the major proposals for formalizing the term. We point out that affordances can be viewed from three different perspectives, not one, and that much of the confusion in discussions of the concept has arisen from this. We propose a new formalism for affordances and discuss its implications for autonomous robot control. Finally, we report preliminary results obtained with robots and link them to these implications.
Abstract-We add to a manipulator's capabilities a new primitive motion which we term a push-grasp. While significant progress has been made in robotic grasping of objects and geometric path planning for manipulation, such work treats the world and the object being grasped as immovable, often declaring failure when simple motions of the object could produce success. We analyze the mechanics of push-grasping and present a quasi-static tool that can be used both for analysis and simulation. We use this analysis to derive a fast, feasible motion planning algorithm that produces stable push-grasp plans for dexterous hands in the presence of object pose uncertainty and high clutter. We demonstrate our algorithm extensively in simulation and on HERB, a personal robotics platform developed at Intel Labs Pittsburgh.
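The quasi-static mechanics of pushing that this abstract refers to can be illustrated with Mason's voting theorem, which predicts the rotation sense of a pushed object from the contact's friction cone and the push direction. The sketch below is a minimal illustration of that theorem, not the paper's analysis tool; the function names and sign conventions are assumptions made here.

```python
import math

def cross2d(a, b):
    # z-component of the 2D cross product a x b
    return a[0] * b[1] - a[1] * b[0]

def rotate(v, ang):
    # rotate a 2D vector counterclockwise by ang radians
    c, s = math.cos(ang), math.sin(ang)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def push_rotation_direction(contact, normal, push_dir, com, mu):
    """Mason's voting theorem: the two edges of the friction cone at the
    contact and the push direction each 'vote' on which side of their line
    the center of friction lies; the majority determines whether the object
    rotates counterclockwise ("CCW") or clockwise ("CW")."""
    half = math.atan(mu)  # friction cone half-angle
    rays = [rotate(normal, half), rotate(normal, -half), push_dir]
    r = (com[0] - contact[0], com[1] - contact[1])
    # cross(d, r) > 0 means the center of friction lies to the left of the ray
    votes = sum(1 if cross2d(d, r) > 0 else -1 for d in rays)
    return "CCW" if votes > 0 else "CW"
```

For instance, pushing straight up at a point to the right of the center of friction yields a counterclockwise rotation, matching physical intuition about the torque of the push.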
Abstract-Humans use a remarkable set of strategies to manipulate objects in clutter. We pick up, push, slide, and sweep with our hands and arms to rearrange clutter surrounding our primary task. But our robots treat the world like the Tower of Hanoi - moving with pick-and-place actions and afraid to interact with it through anything but rigid grasps. This produces inefficient plans and is often inapplicable to heavy, large, or otherwise ungraspable objects. We introduce a framework for planning in clutter that uses a library of actions inspired by human strategies. The action library is derived analytically from the mechanics of pushing and is provably conservative. The framework reduces the problem to one of combinatorial search, and demonstrates planning times on the order of seconds. With this extra functionality, our planner succeeds where traditional grasp planners fail, and works under high uncertainty by exploiting the funneling effect of pushing. We demonstrate our results with experiments in simulation and on HERB, a robotic platform developed at the Personal Robotics Lab at Carnegie Mellon University.
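The combinatorial search mentioned above can be sketched as a breadth-first search over which blocking object to move next, where an object may only be moved once the objects occluding its push path are gone. This is a toy model of the reduction, not the paper's planner; the state encoding, the `preconditions` structure, and the function name are assumptions made here.

```python
from collections import deque

def plan_clearing(blocking, preconditions):
    """BFS over orderings of primitive moves that clear the objects blocking
    a grasp. preconditions[obj] is the set of objects that must be cleared
    before obj can be moved. Returns a shortest ordering, or None."""
    start = frozenset(blocking)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if not state:
            return plan  # everything cleared
        for obj in state:
            if preconditions.get(obj, set()) & (state - {obj}):
                continue  # something still occludes this object's push path
            nxt = frozenset(state - {obj})
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [obj]))
    return None  # no feasible ordering
```

For example, if object A can only be pushed aside after B is out of the way, the search returns the ordering B then A.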
Abstract-We propose a planning method for grasping in cluttered environments in which the robot can make simultaneous contact with multiple objects. With this method, the robot reaches for and grasps the target while simultaneously contacting and moving aside objects to clear a desired path. We use a physics-based analysis of pushing to compute the motion of each object in the scene in response to a set of possible robot motions. Our method enables multiple robot-object interactions, which can be pre-computed and cached, but avoids object-object interactions to keep the problem computationally tractable. Through tests on large sets of simulated scenes, we show that our planner produces more successful grasps in more complex scenes than versions that avoid any interaction with surrounding clutter. We validate our method on a real robot, a PR2, and show that it accurately predicts the outcome of a grasp. We also show that our approach, in conjunction with state-of-the-art object recognition tools, is applicable to real-life scenes that are highly cluttered and constrained.
Abstract-We investigate the problem of estimating the state of an object during manipulation. Contact sensors provide valuable information about the object state during actions which involve persistent contact, e.g. pushing. However, contact sensing is very discriminative by nature, and therefore the set of object states which contact a sensor constitutes a lower-dimensional manifold in the state space of the object. This causes stochastic state estimation methods such as particle filters to perform poorly when contact sensors are used. We propose a new algorithm, the manifold particle filter, which uses dual particles directly sampled from the contact manifold to avoid this problem. The algorithm adapts to the probability of contact by dynamically changing the number of dual particles sampled from the manifold. We compare our algorithm to the particle filter through extensive experiments and show that our algorithm is both faster and better at estimating the state. Our algorithm's performance improves with increasing sensor accuracy and filter update rate. We implement the algorithm on a real robot using a force/torque sensor and strain gauges to track the pose of a pushed object.
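The dual-particle idea can be illustrated in a toy 2D setting: a disc-shaped object pushed by a point finger, where the contact manifold is the circle of object centers at distance r from the finger. When contact is reported, particles are drawn directly from that manifold and weighted against the propagated belief, rather than weighting forward-propagated particles by a near-measure-zero contact likelihood. All names, noise scales, and the kernel-density weighting below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def manifold_particle_filter_step(particles, motion, contact, finger, r,
                                  n_dual=50):
    """One filter step for a disc of radius r pushed by a point finger.
    particles: list of (x, y) object-center hypotheses."""
    # Standard propagation through a noisy motion model.
    propagated = [(x + motion[0] + random.gauss(0, 0.01),
                   y + motion[1] + random.gauss(0, 0.01))
                  for x, y in particles]
    if not contact:
        return propagated
    # Contact reported: sample dual particles on the contact manifold
    # {p : |p - finger| = r}.
    dual = []
    for _ in range(n_dual):
        a = random.uniform(0, 2 * math.pi)
        dual.append((finger[0] + r * math.cos(a),
                     finger[1] + r * math.sin(a)))
    # Weight each dual particle by its consistency with the propagated
    # belief (a kernel density estimate stands in for the motion prior).
    def weight(p):
        return sum(math.exp(-((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 0.02)
                   for q in propagated)
    w = [weight(p) for p in dual]
    if sum(w) == 0:
        return dual
    # Resample the belief from the weighted dual particles.
    return random.choices(dual, weights=w, k=len(particles))
```

After a contact measurement, every surviving particle lies exactly on the contact manifold, which is what lets the filter cope with a measurement set of measure zero.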