Articulated and flexible objects pose a challenge for robot manipulation tasks, yet they are present in many real-world settings, including home and industrial environments. Current approaches to the manipulation of articulated and flexible objects employ ad hoc strategies to sequence and perform actions on them, depending on a number of physical or geometrical characteristics of those objects, as well as on an a priori classification of target object configurations. In this paper, we propose an action planning and execution framework, which (i) considers abstract representations of articulated or flexible objects, (ii) integrates action planning to reason upon such configurations and to sequence an appropriate set of actions aimed at obtaining a target configuration provided as a goal, and (iii) is able to cooperate with humans to collaboratively carry out the plan. On the one hand, we show that a trade-off exists between the way articulated or flexible objects are perceived and how the system represents them; this trade-off greatly impacts the complexity of the planning process. On the other hand, we demonstrate the system's ability to let humans interrupt robot action execution and, in general, contribute to the whole manipulation process. Results related to planning performance are discussed, and examples with a Baxter dual-arm manipulator performing actions collaboratively with humans are shown.
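The perception/representation trade-off mentioned in this abstract can be illustrated with a minimal sketch. This is our own hypothetical encoding, not the paper's actual one: continuous link orientations coming from perception are discretised into a small set of abstract symbols, and the chosen granularity directly determines the size of the planner's state space.

```python
# Hypothetical sketch: discretising perceived link orientations into
# abstract symbols. Coarser granularity -> fewer planner states, at
# the cost of a less faithful representation of the perceived object.

def discretise(angle_deg: float, granularity: int) -> int:
    """Map a continuous orientation to one of `granularity` bins
    covering the full circle (bin width = 360 / granularity)."""
    width = 360.0 / granularity
    return int(round(angle_deg / width)) % granularity

def abstract_configuration(angles, granularity):
    """Abstract state of an articulated object: one symbol per link."""
    return tuple(discretise(a, granularity) for a in angles)

# A 3-link object perceived with noisy angles.
perceived = [2.0, 91.5, 178.0]

# Coarse representation (4 bins of 90 degrees): small state space.
coarse = abstract_configuration(perceived, 4)    # (0, 1, 2)

# Fine representation (36 bins of 10 degrees): larger state space,
# hence a harder planning problem.
fine = abstract_configuration(perceived, 36)     # (0, 9, 18)
```

With 4 bins a single link has 4 abstract orientations; with 36 bins it has 36, so the number of joint configurations grows exponentially faster with the number of links.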
The problem of autonomous transportation in industrial scenarios is receiving renewed interest due to the way it can revolutionise internal logistics, especially in unstructured environments. This paper presents a novel architecture allowing a robot to detect, localise, and track (possibly multiple) pallets using machine learning techniques based only on an on-board 2D laser rangefinder. The architecture is composed of two main components: the first stage is a pallet detector employing a Faster Region-based Convolutional Neural Network (Faster R-CNN) detector cascaded with a CNN-based classifier; the second stage is a Kalman filter for localising and tracking detected pallets, which we also use to defer commitment to a pallet detected in the first stage until sufficient confidence has been acquired via a sequential data acquisition process. For fine-tuning the CNNs, the architecture has been systematically evaluated using a real-world dataset containing 340 labeled 2D scans.

The research leading to these results has received funding from the POR/FESR Liguria regional funding scheme, under grant agreement n. 56 (AIRONE).
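The second stage described in this abstract, a Kalman filter that both refines a pallet's position and defers commitment until the estimate is confident, can be sketched as follows. This is an assumed minimal formulation (constant-position model, scalar noise parameters), not the paper's exact filter:

```python
# Minimal sketch (assumed, not the paper's exact filter): a Kalman
# filter tracking a static pallet's 2D position from noisy detections,
# deferring commitment until the posterior uncertainty is small enough.
import numpy as np

def kalman_track(measurements, meas_var=0.05, proc_var=1e-4,
                 commit_std=0.1):
    """Constant-position Kalman filter over 2D pallet positions.
    Returns the final estimate and whether confidence was reached."""
    x = np.array(measurements[0], dtype=float)  # state: pallet (x, y)
    P = np.eye(2)                               # initial uncertainty
    for z in measurements[1:]:
        P = P + proc_var * np.eye(2)            # predict (static pallet)
        K = P @ np.linalg.inv(P + meas_var * np.eye(2))  # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - x)     # correct
        P = (np.eye(2) - K) @ P
    # Commit only once the average per-axis std dev is below threshold:
    committed = bool(np.sqrt(P.trace() / 2) < commit_std)
    return x, committed
```

With only one or two detections the uncertainty stays above the threshold and the candidate pallet is not yet committed; after a handful of consistent detections the trace of the covariance shrinks and commitment is granted, which mirrors the sequential data acquisition process the abstract describes.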
The goal-oriented manipulation of articulated objects plays an important role in real-world robot tasks. Current approaches typically pose a number of simplifying assumptions to reason upon how to obtain an articulated object's goal configuration, and exploit ad hoc algorithms. The consequence is twofold: firstly, it is difficult to generalise obtained solutions (in terms of actions a robot can execute) to different target object configurations and, in a broad sense, to different objects' physical characteristics; secondly, the representation and the reasoning layers are tightly coupled and interdependent. In this paper we investigate the use of automated planning techniques for dealing with articulated object manipulation tasks. Such techniques allow for a clear separation between knowledge and reasoning, as advocated in Knowledge Engineering. We introduce two PDDL formulations of the task, which rely on conceptually different representations of the orientation of the objects. Experiments involving several planners and objects of increasing size demonstrate the effectiveness of the proposed models, and confirm their exploitability when embedded in a real-world robot software architecture.
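A plausible minimal illustration of "conceptually different representations of orientation", our assumption rather than the paper's exact PDDL encodings, is the distinction between absolute link angles (expressed in a global frame) and relative link angles (expressed with respect to the previous link):

```python
# Hypothetical sketch of two orientation encodings for a chain of
# links: absolute angles (global frame) vs relative angles (w.r.t.
# the previous link). Rotating one joint changes a single relative
# angle but every downstream absolute angle -- a representational
# trade-off that affects how many facts a planning action must update.

def to_relative(absolute):
    """Relative encoding: each link's angle w.r.t. its predecessor."""
    rel = [absolute[0]]
    for prev, cur in zip(absolute, absolute[1:]):
        rel.append((cur - prev) % 360)
    return rel

def to_absolute(relative):
    """Absolute encoding: cumulative sum of relative angles."""
    abs_angles, total = [], 0
    for r in relative:
        total = (total + r) % 360
        abs_angles.append(total)
    return abs_angles

config_abs = [0, 90, 90, 180]          # 4-link object, global frame
config_rel = to_relative(config_abs)   # [0, 90, 0, 90]
assert to_absolute(config_rel) == config_abs  # lossless round trip
```

The two encodings carry the same information, but a planner reasoning over relative angles sees a "local" rotation action, whereas one reasoning over absolute angles must propagate the rotation along the chain.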
The manipulation of articulated objects plays an important role in real-world robot tasks, both in home and industrial environments. A lot of attention has been devoted to the development of ad hoc approaches and algorithms for generating the sequence of movements the robot has to perform in order to manipulate the object. Such approaches can hardly generalise to different settings, and are usually focused on 2D manipulations. In this paper we introduce a set of PDDL+ formulations for performing automated manipulation of articulated objects in a three-dimensional workspace by a dual-arm robot. The presented formulations differ in how gravity is modelled, considering different trade-offs between modelling accuracy and planning performance, and between human readability and parsability by planners. Our experimental analysis compares the formulations on a range of domain-independent planners that aim at generating plans allowing a dual-arm robot to manipulate articulated objects of different sizes. Validation is performed in simulation on a Baxter robot.
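One way gravity might enter such a formulation, again our illustrative assumption rather than the paper's actual PDDL+ model, is as a discrete event that fires automatically on any link that is not grasped, in the way PDDL+ events trigger whenever their precondition holds:

```python
# Illustrative sketch (our assumption, not the paper's PDDL+ model):
# gravity as a discrete event that resets the vertical inclination of
# any link not currently grasped by either arm. In PDDL+, an event
# with precondition "not grasped" could encode the same behaviour.

def apply_gravity(inclinations, grasped):
    """Links are described by an inclination in degrees (0 = hanging
    straight down). Ungrasped links fall to 0; grasped links keep
    their commanded inclination."""
    return [inc if i in grasped else 0
            for i, inc in enumerate(inclinations)]

# Dual-arm robot holding links 0 and 2 of a 4-link object:
state = apply_gravity([45, 30, 90, 60], grasped={0, 2})
# Links 1 and 3 fall to 0; links 0 and 2 are held in place.
```

A more accurate model would let unheld links fall continuously via a PDDL+ process rather than snap instantly, which is exactly the accuracy-versus-performance trade-off the abstract refers to.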
The manipulation of articulated objects is of primary importance in Robotics and can be considered one of the most complex manipulation tasks. Traditionally, this problem has been tackled by developing ad hoc approaches, which lack flexibility and portability. In this paper, we present a framework based on answer set programming (ASP) for the automated manipulation of articulated objects in a robot control architecture. In particular, ASP is employed for representing the configuration of the articulated object, for checking the consistency of such a representation in the knowledge base, and for generating the sequence of manipulation actions. The framework is exemplified and validated on the Baxter dual-arm manipulator in a first, simple scenario. Then, we extend such a scenario to improve the overall setup accuracy and to introduce a few constraints on robot action execution to enforce their feasibility. The extended scenario entails a high number of possible actions that can be fruitfully combined together. Therefore, we exploit macro actions from automated planning in order to provide more effective plans. We validate the overall framework in the extended scenario, thereby confirming the applicability of ASP also in more realistic Robotics settings and showing the usefulness of macro actions for the robot-based manipulation of articulated objects.
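The macro-action idea from this abstract can be sketched in a few lines. The encoding below is hypothetical (primitive actions as joint rotations, macros as merged consecutive rotations of the same joint), but it shows why macros shorten the plan the robot must execute:

```python
# Sketch of the macro-action idea (hypothetical encoding): consecutive
# primitive rotations of the same joint are merged into one macro
# action, yielding an equivalent but shorter plan for the robot.

def to_macros(plan):
    """plan: list of (joint, delta_degrees) primitive rotations.
    Returns an equivalent, shorter plan of merged macro actions."""
    macros = []
    for joint, delta in plan:
        if macros and macros[-1][0] == joint:
            # Same joint as the previous action: fold into one macro.
            macros[-1] = (joint, macros[-1][1] + delta)
        else:
            macros.append((joint, delta))
    return macros

primitive_plan = [("j1", 30), ("j1", 30), ("j2", 45),
                  ("j2", 45), ("j1", 90)]
macro_plan = to_macros(primitive_plan)
# -> [("j1", 60), ("j2", 90), ("j1", 90)]
```

Five primitive actions collapse into three macros; each merged action still reaches the same joint configuration, but the robot performs fewer grasp-rotate-release cycles.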