We present the design, integration, and evaluation of a full-stack robotic system called RoMan, which can conduct autonomous field operations involving physical interaction with its environment. RoMan offers autonomous behaviors that can be triggered by succinct, high-level human input such as "open this box and retrieve the bag inside." The robot's behaviors are driven by a set of planners and controllers grounded in perceptual reconstructions of the environment. These behaviors are articulated by a behavior tree that translates high-level operator input into programs of increasing sensorimotor expressiveness, ultimately driving the lowest-level controllers. The software system is implemented in ROS as a set of independent processes connected by synchronous and asynchronous communication and distributed across two on-board planning/control computers. The behavior stack drives a novel platform consisting of a pair of custom, 500 Nm/axis manipulators mounted on a rotatable torso aboard a tracked base. The robot's head is equipped with forward-looking depth cameras, and the arms carry wrist-mounted force-torque sensors and a mix of three- and four-finger grippers. We discuss design and implementation trade-offs affecting the entire hardware-software stack and the high-level manipulation behaviors. We also demonstrate the system on two manipulation tasks: 1) removing heavy debris from a roadway, where 64% of end-to-end autonomous runs required at most one human intervention; and 2) retrieving an item from a closed container, with a fully autonomous success rate of 56%. Finally, we summarize lessons learned and outline outstanding research problems.
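As a minimal illustration of the behavior-tree pattern described above, the Python sketch below shows how a high-level command could be decomposed into a sequence of lower-level behaviors. All node names, leaf behaviors, and the decomposition itself are hypothetical; they are not taken from RoMan's implementation, where each leaf would dispatch planning or control requests over ROS.

# Minimal behavior-tree sketch: a high-level command such as "open this box
# and retrieve the bag inside" decomposed into a sequence of leaf behaviors.
# All names here are illustrative placeholders, not RoMan's actual tree.

from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


class Node:
    def tick(self) -> Status:
        raise NotImplementedError


class Sequence(Node):
    """Ticks children in order; stops at the first FAILURE or RUNNING child."""
    def __init__(self, children):
        self.children = children

    def tick(self) -> Status:
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS


class Action(Node):
    """Leaf node wrapping a low-level planner/controller call."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self) -> Status:
        print(f"ticking: {self.name}")
        return Status.SUCCESS if self.fn() else Status.FAILURE


# Hypothetical leaf behaviors; in a real system each would invoke a
# perception, planning, or control service and report back asynchronously.
def perceive_box():     return True
def plan_grasp_lid():   return True
def open_lid():         return True
def retrieve_bag():     return True


open_and_retrieve = Sequence([
    Action("perceive_box", perceive_box),
    Action("plan_grasp_lid", plan_grasp_lid),
    Action("open_lid", open_lid),
    Action("retrieve_bag", retrieve_bag),
])

if __name__ == "__main__":
    result = open_and_retrieve.tick()
    print("task result:", result.name)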
We present a collection of sensorimotor models which, when paired with a custom mobile manipulation platform, collectively enable the autonomous deconstruction of piles of debris to facilitate mobility in cluttered spaces. The models address the problems of visual debris segmentation, object selection, and visual grasp planning, and they exercise proprioceptive grasp control and force-controlled object extraction to enact the grasping plan. The object selector segments a debris pile into a set of parts that appear disjoint from one another and executes a rule-based decision program to select the object that appears easiest to extract. A grasp planner then identifies a grasping point and hand preshape at a location where the gripper configuration fits the shape of the object while also satisfying kinematic and collision-avoidance constraints. Our geometric grasp-prototype concept allows the planner to establish grasp suitability by fitting a set of shape-grasp primitives to a 2.5D depth image of the pile. The robot then applies the grasp reactively, pushing against the object until a preset resistance is met. Finally, an admittance controller guides object extraction, enforcing a prescribed end-effector compliance along task-frame axes to minimize adverse forces and torques on the hand. We present experiments demonstrating the applicability of these models in isolation and in concert, including grasp tests conducted on objects representative of a human-scale urban environment.
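To make the admittance-controlled extraction step concrete, here is a minimal sketch of a task-frame admittance law of the kind described above: the measured wrist wrench drives a commanded end-effector twist along the axes selected as compliant, so the hand gives way to adverse forces and torques while the pull direction remains position-controlled. The gains, axis selection, and limits below are illustrative assumptions, not the paper's actual parameters.

# Task-frame admittance sketch: map the measured wrench to a commanded
# twist along compliant axes. Gains and axis choices are hypothetical.

import numpy as np


def admittance_twist(wrench, compliant_axes, gains, limit=0.05):
    """Map a 6-vector wrench [fx fy fz tx ty tz] in the task frame to a
    commanded 6-vector twist [vx vy vz wx wy wz].

    compliant_axes: boolean mask selecting which task-frame axes comply.
    gains: per-axis admittance gains (m/s per N, rad/s per Nm).
    limit: clamp on each commanded twist component, for safety.
    """
    twist = np.where(compliant_axes, gains * wrench, 0.0)
    return np.clip(twist, -limit, limit)


# Example: during extraction, comply laterally and in rotation while the
# pull direction (task-frame x) stays position-controlled (zero admittance).
compliant = np.array([False, True, True, True, True, True])
gains = np.array([0.0, 0.002, 0.002, 0.01, 0.01, 0.01])

measured_wrench = np.array([12.0, -4.0, 1.5, 0.3, -0.1, 0.0])  # N, Nm
print(admittance_twist(measured_wrench, compliant, gains))

A design note on this formulation: zeroing the gain on an axis is equivalent to removing it from the compliant set, so the boolean mask is redundant in principle, but keeping it explicit makes the task-frame axis selection easy to read and to change per task phase.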