This work presents a concept for autonomous mobile manipulation in industrial environments. Utilizing autonomy enables an unskilled human worker to easily configure a complex robotic system in a setup phase before carrying out fetch-and-carry operations in the execution phase. To perform the given tasks in real industrial production sites, we propose a robotic system consisting of a mobile platform, a torque-controlled manipulator, and an additional sensor head. Multiple attached sensors allow for perception of the environment and of the objects to be manipulated, which is essential for coping with uncertainties in real-world applications. To provide an easy-to-use and flexible system, we present a modular software concept that is handled and organized by a hierarchical flow control, depending on the given task and environmental requirements. The presented concept for autonomous mobile manipulation is implemented exemplarily for industrial manipulation tasks and validated through real-world application in a water pump production site. Furthermore, the concept has also been applied to other robotic systems and domains, such as planetary exploration with a rover.
Manually generating a 3D model of an object is very time-consuming for a human operator. Next-best-view (NBV) planning is an important aspect of automating this procedure in a robotic environment. We propose a surface-based NBV approach, which creates a triangle surface from a real-time data stream and determines viewpoints similar to human intuition. The boundaries of the surface are detected, and a quadratic patch is estimated for each boundary. Then several viewpoint candidates are calculated that look perpendicularly onto the surface and overlap with previous sensor data. An NBV is selected with the goal of filling areas that are occluded. This approach focuses on completing a 3D model of an unknown object; the search space for the viewpoints is not restricted to a cylinder or sphere. Our NBV determination proves to be very fast and is evaluated in an experiment on test objects using an industrial robot and a laser range scanner.
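The viewpoint-candidate idea above can be illustrated with a minimal sketch. This is not the authors' implementation: the standoff distance, the `occluded_area` scoring callback, and the function names are all hypothetical, and only the geometric intuition (place the sensor along the surface normal, looking perpendicularly back onto the boundary patch) follows the abstract.

```python
import numpy as np

def candidate_viewpoint(boundary_point, surface_normal, standoff=0.5):
    """Place a sensor pose along the outward surface normal at a fixed
    standoff, looking perpendicularly back onto the boundary patch.
    (Illustrative sketch; standoff value is an assumption.)"""
    n = surface_normal / np.linalg.norm(surface_normal)
    position = boundary_point + standoff * n
    view_dir = -n  # look perpendicularly onto the surface
    return position, view_dir

def select_nbv(candidates, occluded_area):
    """Pick the candidate expected to fill the largest occluded area.
    `occluded_area` is a hypothetical scoring callback."""
    return max(candidates, key=occluded_area)
```

In practice such a score would also have to reward overlap with previous sensor data so that new scans can be registered against the existing model.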
We present a next-best-scan (NBS) planning approach for autonomous 3D modeling. The system successively completes a 3D model of complex-shaped objects by iteratively selecting an NBS based on previously acquired data. For this purpose, new range data is accumulated in the loop into a 3D surface (streaming reconstruction), and new continuous scan paths along the estimated surface trend are generated. Further, the space around the object is explored using a probabilistic exploration approach that considers sensor uncertainty, which allows for collision-free path planning in order to completely scan unknown objects. For each scan path, the expected information gain is determined, and the best path is selected as the NBS. The presented NBS approach is tested with a laser striper system attached to an industrial robot, and the results are compared to state-of-the-art next-best-view methods. Our results show promising performance with respect to completeness, quality, and scan time.
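The information-gain criterion for selecting among candidate scan paths can be sketched as follows. This is a simplified reading of the abstract, not the paper's method: `visible_voxels` is a hypothetical visibility query, and the gain is reduced to counting still-unknown voxels that the sensor would observe along the path.

```python
def expected_information_gain(scan_path, voxel_state, visible_voxels):
    """Count the unknown voxels the sensor would observe along a path.
    `voxel_state` maps voxel -> 'unknown' | 'free' | 'occupied';
    `visible_voxels(pose)` is a hypothetical visibility query."""
    seen = set()
    for pose in scan_path:
        seen.update(v for v in visible_voxels(pose)
                    if voxel_state.get(v, 'unknown') == 'unknown')
    return len(seen)

def next_best_scan(paths, voxel_state, visible_voxels):
    """Select the candidate scan path with the highest expected gain."""
    return max(paths, key=lambda p: expected_information_gain(
        p, voxel_state, visible_voxels))
```

A full system would weigh this gain against path length or scan time; here only the selection step is shown.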
Active scene exploration incorporates object recognition methods for analyzing a scene of partially known objects and exploration approaches for autonomously modeling unknown parts. In this work, recognition, exploration, and planning methods are extended and combined into a single scene exploration system, enabling advanced techniques such as multi-view recognition from planned view positions and iterative recognition by integrating new objects from a scene. Here, a geometry-based approach is used for recognition, i.e., matching objects against a database. Unknown objects are autonomously modeled and added to the recognition database. Next-best-view planning is performed both for recognition and for modeling. Moreover, 3D measurements are merged into a Probabilistic Voxel Space, which is utilized for planning collision-free paths, finding minimal-occlusion views, and verifying the poses of recognized objects against all previous information. Experiments on an industrial robot with attached 3D sensors are shown for scenes with household and industrial objects.
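A probabilistic voxel space of this kind is commonly maintained with a log-odds occupancy update per voxel; the sketch below shows that standard technique, not the paper's specific implementation, and the increment and clamping constants are illustrative assumptions.

```python
import math

L_HIT, L_MISS = 0.85, -0.4   # illustrative log-odds increments per measurement
L_MIN, L_MAX = -2.0, 3.5     # clamping bounds to keep the map updatable

def update_voxel(log_odds, hit):
    """Standard clamped log-odds occupancy update: raise the value when a
    measurement endpoint hits the voxel, lower it when a ray passes through."""
    log_odds += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, log_odds))

def occupancy_probability(log_odds):
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))
```

Voxels near probability 0.5 are "unknown" and drive exploration, while high- and low-probability voxels support collision-free path planning and pose verification as described above.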