Abstract—RRT planning is a well-developed paradigm for solving motion planning problems in which the search occurs directly in the configuration space of the object to be moved. In manipulation, however, the object to be moved is controlled only indirectly, through contact with a robot manipulator. This is a different problem, and one that must be solved if RRT planning is to be applied to robot manipulation of objects. The simplest version of this problem is one in which the robot has a single finger. The planning problem is then to search for a sequence of pushes that will move the object from a start to a goal configuration. Our main innovation is to split the planning problem into an RRT planner operating in the configuration space of the object and a local planner that generates sequences of actions in joint space to move the object between a pair of nodes in the RRT. We show that this two-level strategy enables us to find successful pushing plans. We also show the effect of varying the number of pushing actions randomly sampled at each stage.
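The two-level strategy described above can be sketched in a few lines. Everything here is a toy stand-in, not the paper's implementation: the object lives in a 2-D plane rather than a full configuration space, and `push_forward_model` is an invented forward model that simply translates the object a fixed step in the push direction.

```python
import math
import random

def push_forward_model(obj, push_angle, step=0.05):
    # Toy quasi-static forward model (assumption): a push translates the
    # object a fixed step in the push direction. A real system would use
    # contact mechanics or a physics engine here.
    x, y = obj
    return (x + step * math.cos(push_angle), y + step * math.sin(push_angle))

def local_plan(start, target, n_candidates=8):
    # Local planner: sample n_candidates random pushes and keep the one
    # whose predicted outcome lands closest to the target node.
    best, best_d, best_a = None, float("inf"), None
    for _ in range(n_candidates):
        a = random.uniform(0, 2 * math.pi)
        nxt = push_forward_model(start, a)
        d = math.dist(nxt, target)
        if d < best_d:
            best, best_d, best_a = nxt, d, a
    return best_a, best

def rrt_push_plan(start, goal, iters=2000, goal_tol=0.06):
    # High-level RRT in the object's configuration space; each extension
    # is realised by the local push planner above.
    random.seed(0)
    tree = {start: None}   # node -> parent node
    actions = {}           # node -> push action that reached it
    for _ in range(iters):
        # Goal-biased sampling in the unit square.
        sample = goal if random.random() < 0.2 else (random.uniform(0, 1),
                                                     random.uniform(0, 1))
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        a, new = local_plan(nearest, sample)
        tree[new] = nearest
        actions[new] = a
        if math.dist(new, goal) < goal_tol:
            # Reconstruct the sequence of pushes back to the start.
            plan, n = [], new
            while n != start:
                plan.append(actions[n])
                n = tree[n]
            return list(reversed(plan))
    return None

plan = rrt_push_plan((0.1, 0.1), (0.8, 0.8))
```

With the fixed seed this finds a pushing plan of a few dozen actions; the number of candidate pushes per extension (`n_candidates`) is the parameter whose effect the abstract says is studied.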
As robots make their way out of factories into human environments, outer space, and beyond, they require the skill to manipulate their environment in multifarious, unforeseeable circumstances. In this regard, pushing is an essential motion primitive that dramatically extends a robot's manipulation repertoire. In this work, we review the robotic pushing literature. While focusing on work concerned with predicting the motion of pushed objects, we also cover relevant applications of pushing for planning and control. Beginning with analytical approaches, under which we also subsume physics engines, we then proceed to discuss work on learning models from data. In doing so, we dedicate a separate section to deep learning approaches, which have seen a recent upsurge in the literature. Concluding remarks and further research perspectives are given at the end of the paper.

Stüber et al., Let's Push Things Forward

PROBLEM STATEMENT

Even in ideal conditions, such as structured environments where an agent has a complete model of the environment and perfect sensing abilities, the problems of robotic grasping and manipulation are not trivial. By a complete model of the environment we mean that the physical and geometric properties of the world, such as the pose, shape, friction parameters, and mass of the object we wish to manipulate, are exactly known. In fact, the object to be manipulated is indirectly controlled by contacts with a robot manipulator (e.g. pushing by a contacting finger part), and an inverse model (IM), which computes an action to produce the desired motion or set of forces on the object, may not be known. Sometimes forward models (FMs) may be fully or partially known, even where IMs are not available. In such cases, an FM can be used to estimate the next state of a system, given the current state and a set of executable actions. This enables planning to be performed.

4. Physics engines. It employs a physics engine as a "black box" to make predictions about the interactions.
5. Data-driven. It learns how to predict physical interaction from examples.
6. Deep learning. As with the data-driven approaches, it learns how to construct an FM from examples. The key insight is that the deep learning approaches are based on feature extraction.

The features highlighted for each approach are as follows.
• The assumptions made by the authors in their approach. We highlight i) the quasi-static assumption in the model, ii) whether it is a seminal work on 2D shapes, and iii) whether the method requires a known model of the object to be manipulated.
• The type of motion analysed in the paper, such as 1D, planar (2D translation and 1D rotation around the x-axis), or full 3D (3D translation and 3D rotation).
• The aim of the paper. We distinguish between predicting the motion of the object, estimating physical parameters, planning pushes, and analysing a push to reach a stable grasp.
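The role an FM plays when no IM is available can be illustrated with a minimal sketch. The forward model below is a made-up 1-D toy (a push attenuated by a friction-like factor), and the missing inverse model is recovered by brute-force search over a discretised action set:

```python
def fm(state, action):
    # Toy 1-D forward model (assumption): `action` is a signed push
    # distance, attenuated by a friction-like factor of 0.8.
    return state + 0.8 * action

def invert_forward_model(fm, state, desired_next, actions):
    # No analytical inverse model (IM) is available, so approximate one by
    # searching the action set for the best FM prediction.
    return min(actions, key=lambda a: abs(fm(state, a) - desired_next))

# Discretised pushes in [-1, 1] at 0.1 resolution.
actions = [round(-1.0 + 0.1 * i, 1) for i in range(21)]
a = invert_forward_model(fm, state=0.0, desired_next=0.4, actions=actions)
```

Chaining such one-step lookups is exactly what lets an FM support planning even when no IM is known.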
This paper presents a data-efficient approach to learning transferable forward models for robotic push manipulation. Our approach extends our previous work on contact-based predictors by leveraging information about the pushed object's local surface features. We test the hypothesis that, by conditioning predictions on local surface features, we can achieve generalisation across objects of different shapes. In doing so, we do not require a CAD model of the object but rather rely on a point cloud object model (PCOM). Our approach involves learning motion models that are specific to contact models. Contact models encode the contacts seen during training and allow similar contacts to be generated at prediction time. Predicting on familiar ground reduces the motion models' sample complexity, while using local contact information for prediction increases their transferability. In extensive experiments in simulation, our approach achieves transfer learning for various test objects, outperforming a baseline predictor. We support those results with a proof of concept on a real robot.
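The idea of conditioning motion predictions on local surface features can be illustrated with a deliberately simplified sketch. Everything here is a stand-in: a 1-D k-nearest-neighbour regressor replaces the paper's learned, contact-model-specific motion models, and a single scalar "curvature" value replaces the local surface features.

```python
def predict_push(train, query_feature, k=3):
    # Feature-conditioned prediction (sketch): find the k training contacts
    # with the most similar local surface feature and average their
    # recorded object displacements.
    nearest = sorted(train, key=lambda ex: abs(ex[0] - query_feature))[:k]
    return sum(d for _, d in nearest) / k

# Hypothetical training contacts: (local surface feature, displacement).
train = [(0.0, 0.10), (0.1, 0.11), (0.5, 0.30), (0.6, 0.32), (1.0, 0.50)]
pred = predict_push(train, query_feature=0.52)
```

Because the prediction depends only on the local feature at the contact, not on the object's identity, the same predictor can be queried on a novel shape, which is the transferability argument made above.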
Abstract—Dexterous grasping of objects with uncertain pose is a hard, unsolved problem in robotics. This paper addresses it using information-gain re-planning. First, we show how tactile information, acquired during a failed attempt to grasp an object, can be used to refine the estimate of that object's pose. Second, we show how this information can be used to re-plan new reach-to-grasp trajectories for successive grasp attempts. Finally, we show how reach-to-grasp trajectories can be modified so that they maximise the expected tactile information gain while simultaneously delivering the hand to the grasp configuration most likely to succeed. Our main novel outcome is thus to enable tactile information-gain planning for dexterous, high degree-of-freedom (DoF) manipulators. We achieve this using a combination of information-gain planning, hierarchical probabilistic roadmap planning, and belief updating from tactile sensors for objects with non-Gaussian pose uncertainty in 6 dimensions. The method is demonstrated in trials with simulated robots. Sequential re-planning is shown to achieve a greater success rate than single grasp attempts, and trajectories that maximise information gain require fewer re-planning iterations than conventional planning methods before a grasp is achieved.
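The belief-refinement step can be sketched as a simple Bayesian reweighting of pose hypotheses. This is a 1-D toy, not the paper's method: the real belief is 6-D and non-Gaussian, and the likelihood would come from the hand geometry and tactile sensor model rather than the invented Gaussian below.

```python
import math

def update_pose_belief(particles, contact_point, sigma=0.05):
    # Bayesian update (sketch): weight each pose hypothesis by how well it
    # explains the sensed contact, then normalise the weights.
    weights = [math.exp(-((p - contact_point) ** 2) / (2 * sigma ** 2))
               for p in particles]
    total = sum(weights)
    return [w / total for w in weights]

# 1-D toy: hypotheses for the object's position along one axis.
particles = [0.0, 0.1, 0.2, 0.3]
weights = update_pose_belief(particles, contact_point=0.1)
```

After an unexpected (or missing) contact, mass concentrates on the hypotheses consistent with the touch, and the next reach-to-grasp trajectory is planned against this refined belief.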
Touch is an important modality to recover object shape. We present a method for a robot to complete a partial shape model by local tactile exploration. In local tactile exploration, the finger is constrained to follow the local surface. This is useful for recovering information about a contiguous portion of the object and is frequently employed by humans. There are three contributions. First, we show how to segment an initial point cloud of a grasped, unknown object into hand and object. Second, we present a local tactile exploration planner. This combines a Gaussian Process (GP) model of the object surface with an AtlasRRT planner. The GP predicts the unexplored surface and the uncertainty of that prediction. The AtlasRRT creates a tactile exploration path across this predicted surface, driving it towards the region of greatest uncertainty. Finally, we experimentally compare the planner with alternatives in simulation, and demonstrate the complete approach on a real robot. We show that our planner successfully traverses the object, and that the full object shape can be recovered with a good degree of accuracy.
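The uncertainty-driven choice of where to explore next can be sketched as follows. Note the heavy simplification: the GP posterior variance is replaced by a distance-to-data proxy (far from observed contacts, uncertainty is high), and the surface is a 1-D toy rather than an atlas of charts over a predicted 3-D surface.

```python
def next_exploration_target(observed, candidates):
    # Pick the candidate surface point with the highest uncertainty,
    # where uncertainty is approximated (assumption) by the distance to
    # the nearest already-explored contact point.
    def uncertainty(c):
        return min(abs(c - o) for o in observed)
    return max(candidates, key=uncertainty)

observed = [0.0, 0.2, 0.9]         # contact points already explored
candidates = [0.1, 0.5, 0.8, 1.0]  # reachable points on the predicted surface
target = next_exploration_target(observed, candidates)
```

Driving the finger repeatedly toward the most uncertain reachable region is what lets the planner traverse the object while shrinking the residual shape uncertainty.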