Manipulation of deformable objects gives rise to an important set of open problems in robotics. Application areas include robotic surgery, household robotics, manufacturing, logistics, and agriculture. Related research problems span modeling and estimation of an object's shape; estimation of an object's material properties, such as elasticity and plasticity; object tracking and state estimation during manipulation; and manipulation planning and control. In this survey article, we begin with a tutorial on the foundations of models of shape and shape dynamics. We then use this as the basis for a review of existing work on learning and estimation of these models, and on motion planning and control to achieve desired deformations. We conclude by discussing potential future lines of work.
In this paper we present a multi-modal framework for offline learning of generative models of object deformation under robotic pushing. The model is multi-modal in that it integrates force and visual information. The framework consists of several sub-models that are independently calibrated from the same data. These component models can be sequenced to provide many-step prediction and classification. When presented with a test example, in which a robot finger pushes a deformable object made of an unidentified but previously learned material, the predictions of the modules for the different materials are compared so as to classify the unknown material. Our approach, which consists of offline learning and combination of multiple models, goes beyond previous techniques by enabling i) prediction over many steps, ii) learning of plastic and elastic deformation from real data, iii) prediction of the forces experienced by the robot, iv) classification of materials from both force and visual data, and v) prediction of object behaviour after the robot's contact terminates. While previous work on deformable object behaviour in robotics has offered one or two of these features, none has offered a way to achieve them all, and none has offered classification from a generative model. We achieve all of them through separately learned models that can be combined in different ways for different purposes.
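The classification scheme described above can be sketched as follows: each per-material generative model rolls out a many-step prediction from the same initial state, and the unknown material is assigned to the model whose prediction best matches the observed force and visual data. This is a minimal illustration, not the paper's actual pipeline; the rollout interface and the Euclidean error measure are assumptions.

```python
import numpy as np

def classify_material(models, initial_state, pushes, observed):
    """Compare each material model's many-step prediction against the
    observed trajectory and return the best-matching material.

    models:   dict mapping material name -> rollout function
              (hypothetical interface: rollout(initial_state, pushes)
              returns a (T, D) array of predicted feature vectors)
    observed: (T, D) array of measured force/visual features
    """
    errors = {}
    for name, rollout in models.items():
        predicted = rollout(initial_state, pushes)  # many-step prediction
        errors[name] = float(np.mean(np.linalg.norm(predicted - observed, axis=1)))
    best = min(errors, key=errors.get)
    return best, errors
```

The same comparison of accumulated prediction error also gives a natural confidence measure over materials, since a generative model that explains the data well will have a markedly lower error than its competitors.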
Imagine that you had to design a physical agent that could collect information from its environment, then store and process that information to help it respond appropriately to novel situations. What kinds of information should it attend to? How should that information be represented so as to allow efficient use and re-use? What kinds of constraints and trade-offs would there be? There are no unique answers. In this paper, we discuss some of the ways in which the need to address problems of varying kinds and complexity can be met by different information processing systems. We also discuss different ways in which relevant information can be obtained, and how different kinds of information can be processed and used, by both biological organisms and artificial agents. We analyse several constraints and design features, and show how they relate both to biological organisms and to lessons that can be learned from building artificial systems. Our standpoint overlaps with Karmiloff-Smith (1992) in that we assume that a collection of mechanisms geared to learning and developing in biological environments is available in forms that constrain, but do not determine, what can or will be learnt by individuals.
Faced with a vast, dynamic environment, animals and robots alike need to acquire and segregate information about objects. The form of their internal representation depends on how the information is used. Sometimes it should be compressed and abstracted from the original, often complex, sensory information so that it can be efficiently stored and manipulated, for example to derive interpretations, causal relationships, functions, or affordances. We discuss how salient features of objects can be used to generate compact representations that later allow relatively accurate reconstruction and reasoning. Particular moments in the course of an object-related process can be selected and stored as 'key frames'. Specifically, we consider the problem of representing and reasoning about a deformable object from the viewpoint of both an artificial and a natural agent.
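One simple way to realise the key-frame idea is to store a frame only when the object's state has changed appreciably since the last stored frame. The sketch below assumes each frame is already encoded as a compact feature vector; the threshold and distance measure are illustrative choices, not the paper's method.

```python
import numpy as np

def select_key_frames(states, threshold=0.05):
    """Return indices of 'key frames': frames whose (hypothetical) shape
    feature vector differs from the last stored key frame by more than
    `threshold`. The first and final frames are always kept."""
    states = np.asarray(states, dtype=float)
    keys = [0]
    last = states[0]
    for i in range(1, len(states)):
        if np.linalg.norm(states[i] - last) > threshold:
            keys.append(i)
            last = states[i]
    if keys[-1] != len(states) - 1:
        keys.append(len(states) - 1)  # keep the final resting state
    return keys
```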
Deformable objects abound in nature, and future robots must be able to predict how such objects will behave in order to control them. In this paper we present a method for learning to predict the behaviour of deformable objects. We use a mass-spring-like model, extended to better suit our purposes, and apply it to the concrete scenario of robotic manipulation of an elastic deformable object. We describe a procedure for automatically calibrating the model's parameters, taking images and forces recorded from a real sponge as ground truth. This ground truth provides the error measures that drive an evolutionary search over the model's parameter space. The resulting calibrated model makes good predictions for 200 frames (6.667 seconds of real-time video), even when tested with forces applied at positions different from those used in training.
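The two ingredients of this approach are a forward simulator and a parameter search driven by prediction error against the recorded data. The following is a minimal sketch under stated assumptions: a 2D mesh with unit masses, explicit Euler integration, a scalar stiffness and damping, and a toy (mu + lambda) evolution strategy standing in for the paper's evolutionary process. `simulate` and `error` are placeholders for the actual simulation and image/force error measures.

```python
import numpy as np

def spring_step(pos, vel, rest, edges, k, damping, ext_force, dt=1.0 / 30.0):
    """One explicit-Euler update of a damped mass-spring mesh.
    pos, vel, ext_force: (N, 2) arrays; edges: list of (i, j) pairs;
    rest: per-edge rest lengths; k, damping: scalar model parameters."""
    force = ext_force.copy()
    for e, (i, j) in enumerate(edges):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-9
        f = k * (length - rest[e]) * (d / length)  # Hooke's law along the edge
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force)             # unit masses assumed
    return pos + dt * vel, vel

def calibrate(simulate, error, bounds, pop=20, gens=50, seed=0):
    """Toy (mu + lambda) evolutionary search over model parameters.
    `simulate(p)` runs the model with parameter vector p; `error` scores
    the mismatch against recorded image/force ground truth. Both are
    placeholders for the paper's actual pipeline."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    params = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        scores = np.array([error(simulate(p)) for p in params])
        elite = params[np.argsort(scores)[: pop // 4]]
        children = (elite[rng.integers(len(elite), size=pop)]
                    + rng.normal(scale=0.05 * (hi - lo), size=(pop, len(lo))))
        params = np.clip(children, lo, hi)
    scores = np.array([error(simulate(p)) for p in params])
    return params[np.argmin(scores)]
```

Because the search only needs a scalar error per candidate, the same loop can score candidates on image mismatch, force mismatch, or a weighted combination of the two.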