Abstract-Robust robotic manipulation and perception remain difficult challenges, particularly in unstructured environments. To address this, we propose to couple manipulation and perception: the robot observes its own deliberate interactions with the world. These interactions reveal sensory information that would otherwise remain hidden and facilitate the interpretation of perceptual data. To demonstrate the effectiveness of interactive perception, we present a skill for the manipulation of articulated objects. We show how UMan, our mobile manipulation platform, obtains a kinematic model of an unknown object. The model then enables the robot to perform purposeful manipulation. Our algorithm is robust, requires no prior knowledge of the object, is insensitive to lighting, texture, color, specularities, and background, and is computationally efficient.
Abstract-We introduce a learning-based approach to manipulation in unstructured environments. This approach permits autonomous acquisition of manipulation expertise from interactions with the environment. The resulting expertise enables a robot to perform effective manipulation based on partial state information. The manipulation expertise is represented in a relational state representation and learned using relational reinforcement learning. The relational representation renders learning tractable by collapsing a large number of states onto a single relational state. The relational state representation is carefully grounded in the perceptual and interaction skills of the robot. This ensures that symbolically learned knowledge remains meaningful in the physical world. We experimentally validate the proposed learning approach on the task of manipulating an articulated object to obtain a model of its kinematic structure. Our experiments demonstrate that the manipulation expertise acquired by the robot leads to substantial performance improvements. These improvements are maintained when experience is applied to previously unseen objects.
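The abstract above leaves the relational learner unspecified. As a minimal sketch in Python, the following illustrates the core idea of collapsing many concrete states onto one relational state before applying standard Q-learning. The predicates, observation format, and hyperparameters are illustrative assumptions, not the paper's actual representation.

```python
import random
from collections import defaultdict

# Hypothetical sketch of relational Q-learning. Many concrete world states
# map to the same frozenset of predicates, which keeps the Q-table small.

def relational_state(observation):
    """Abstract a concrete observation into relational predicates.
    The predicates and observation fields are illustrative placeholders."""
    preds = set()
    for joint in observation["joints"]:
        preds.add(("has_joint", joint["type"]))      # e.g. revolute/prismatic
        if joint["explored"]:
            preds.add(("explored", joint["type"]))
    return frozenset(preds)

class RelationalQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)    # (relational state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, s):
        """Epsilon-greedy action selection over the relational state."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def update(self, s, a, r, s_next):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
```

Because the table is keyed on frozensets of predicates, any two concrete scenes that ground to the same predicates share value estimates; this collapsing is what makes the learning tractable and lets experience transfer to previously unseen objects.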
We present an interactive perceptual skill for segmenting, tracking, and kinematic modeling of 3D articulated objects. This skill is a prerequisite for general manipulation in unstructured environments. Robot-environment interaction is used to move an unknown object, creating a perceptual signal that reveals the kinematic properties of the object. The resulting perceptual information can then inform and facilitate further manipulation. The algorithm is computationally efficient, handles occlusion, and requires only little object motion; its sole requirement is sufficient texture for visual feature tracking. We conducted experiments with everyday objects on a mobile manipulation platform equipped with an RGB-D sensor. The results demonstrate the robustness of the proposed method to lighting conditions, object appearance, size, structure, and configuration.
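One standard way to realize the segmentation step described above is to exploit the fact that points on the same rigid body keep constant pairwise distances while the object moves. The sketch below implements this idea on tracked 3D feature trajectories; the rigidity test, its threshold, and the track format are assumptions for illustration, not necessarily the paper's exact method.

```python
import numpy as np

# Sketch: segment tracked features into rigid bodies by linking feature
# pairs whose mutual distance stays (nearly) constant over time, then
# taking connected components of the resulting graph as bodies.

def segment_rigid_bodies(tracks, var_thresh=1e-4):
    """tracks: array of shape (num_features, num_frames, 3), in meters."""
    n = tracks.shape[0]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(tracks[i] - tracks[j], axis=1)  # per frame
            if np.var(d) < var_thresh:   # distance ~constant -> same body
                adj[i].append(j)
                adj[j].append(i)
    # connected components via depth-first search
    labels, comp = [-1] * n, 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack = [start]
        while stack:
            u = stack.pop()
            if labels[u] == -1:
                labels[u] = comp
                stack.extend(adj[u])
        comp += 1
    return labels   # labels[i] is the rigid-body index of feature i
```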
Abstract-Autonomous manipulation in unstructured environments presents roboticists with three fundamental challenges: object segmentation, action selection, and motion generation. These challenges become more pronounced when unknown man-made or natural objects are cluttered together in a pile. We present an end-to-end approach to the problem of manipulating unknown objects in a pile, with the objective of removing all objects from the pile and placing them into a bin. Our robot perceives the environment with an RGB-D sensor, segments the pile into objects using non-parametric surface models, computes the affordances of each object, and selects the best affordance and its associated action to execute. Then, our robot instantiates the proper compliant motion primitive to safely execute the desired action. For efficient and reliable action selection, we developed a framework for supervised learning of manipulation expertise. We conducted dozens of trials and report on several hours of experiments involving more than 1500 interactions. The results show that our learning-based approach for pile manipulation outperforms a common-sense heuristic as well as a random strategy, and is on par with human action selection.
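The framework for supervised learning of manipulation expertise is described only at a high level. As a hedged illustration, one could score candidate (object, action) pairs with a classifier trained on past interaction outcomes, as sketched below; the feature set, the tiny training set, and the model choice are placeholders, not the paper's actual design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sketch of learned action selection for pile manipulation.
# Each candidate (object, action) pair is a feature vector; a classifier
# trained on logged interactions predicts the probability of success.

# Placeholder features per candidate:
# [object height (m), free space around object (m),
#  fraction of visible boundary, action id (0 = grasp, 1 = push)]
X_train = np.array([
    [0.05, 0.10, 0.9, 0],   # grasp of an isolated object -> success
    [0.04, 0.01, 0.2, 0],   # grasp deep inside the pile  -> failure
    [0.04, 0.01, 0.2, 1],   # push to spread the pile     -> success
])
y_train = np.array([1, 0, 1])   # 1 = interaction succeeded

model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

def select_action(candidates):
    """Return the index of the candidate with the highest predicted
    success probability (column 1 of predict_proba is class 1)."""
    scores = model.predict_proba(np.asarray(candidates))[:, 1]
    return int(np.argmax(scores))
```

In this framing, the heuristic and random baselines from the abstract correspond to replacing the learned `select_action` with a hand-coded rule or a uniform draw over the same candidate set.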
We present a skill for the perception of three-dimensional kinematic structures of rigid articulated bodies with revolute and prismatic joints. The ability to acquire such models autonomously is required for general manipulation in unstructured environments. Experiments on a mobile manipulation platform with real-world objects under varying lighting conditions demonstrate the robustness of the proposed method. This robustness is achieved by integrating perception and manipulation capabilities: the manipulator interacts with the environment to move an unknown object, thereby creating a perceptual signal that reveals the kinematic properties of the object. For good performance, the perceptual skill requires the presence of trackable visual features in the scene.
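To make the revolute/prismatic distinction concrete, the following sketch classifies a joint from the relative poses of two already-segmented rigid bodies: sustained relative rotation indicates a revolute joint, pure relative translation a prismatic one. The 4x4-transform input, the trace-based angle, and the thresholds are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

# Sketch: classify the joint between two rigid bodies from a sequence of
# 4x4 homogeneous transforms giving the pose of body B in body A's frame.

def rotation_angle(R):
    """Rotation angle (radians) of a 3x3 rotation matrix via its trace."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def classify_joint(transforms, rot_thresh=0.1, trans_thresh=0.01):
    """Return 'revolute', 'prismatic', or 'rigid'.
    rot_thresh in radians, trans_thresh in meters (illustrative values)."""
    T0_inv = np.linalg.inv(transforms[0])
    max_angle, max_shift = 0.0, 0.0
    for T in transforms[1:]:
        D = T0_inv @ T                        # motion relative to first pose
        max_angle = max(max_angle, rotation_angle(D[:3, :3]))
        max_shift = max(max_shift, np.linalg.norm(D[:3, 3]))
    if max_angle > rot_thresh:     # rotation dominates -> hinge-like joint
        return "revolute"
    if max_shift > trans_thresh:   # translation only  -> slider-like joint
        return "prismatic"
    return "rigid"
```

Checking rotation before translation matters: a revolute joint whose axis is offset from the body origin also translates that origin, so a translation-first test would misclassify it.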