Reducing cumulative error is a crucial task in simultaneous localization and mapping (SLAM). Loop closure detection (LCD) is usually exploited to accomplish this for SLAM and robot navigation: fast and accurate loop detection can significantly improve global localization stability and reduce mapping errors. However, point-cloud-based LCD still suffers from problems such as over-reliance on high-resolution sensors and poor detection efficiency and accuracy. In this paper, we therefore propose a novel and fast global LCD method that uses a low-cost 16-beam Lidar and is based on a “Simplified Structure” representation. First, we extract “Simplified Structures” from the indoor point cloud, classify them into two levels, and manage them hierarchically according to their structural salience. A “Simplified Structure” has simple geometry and captures the stable structures of an indoor scene. Second, we analyze the registration suitability of the point clouds with a pre-match, and present a hierarchical matching strategy with multiple geometric constraints in Euclidean space to match two scans. Finally, we construct a multi-state loop evaluation model for the multi-level structure to determine whether two candidate scans form a loop. When a loop is detected successfully, our method also provides a transformation for point cloud registration with the “Simplified Structures”. Experiments are carried out in three types of indoor environment, with data collected by a 16-beam Lidar. The experimental results demonstrate that our method detects global loop closures efficiently and accurately: the average global LCD precision, accuracy, and negative rate are approximately 0.90, 0.96, and 0.97, respectively.
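The abstract gives no code, but the core idea of matching two scans via geometric constraints on structure features can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes each scan is reduced to a small set of "Simplified Structure" centroids with correspondences already given by index, and it uses a single constraint (pairwise Euclidean distance consistency, which is invariant to the rigid motion between scans) in place of the paper's hierarchical multi-constraint matching. The tolerance and ratio thresholds are illustrative values.

```python
import numpy as np

def consistent_pairs(scan_a, scan_b, tol=0.2):
    """Count structure pairs whose inter-structure Euclidean distances agree
    between the two scans. Pairwise distances are preserved by rigid motion,
    so a true loop (revisited place) keeps them consistent."""
    n = 0
    for i in range(len(scan_a)):
        for j in range(i + 1, len(scan_a)):
            da = np.linalg.norm(scan_a[i] - scan_a[j])
            db = np.linalg.norm(scan_b[i] - scan_b[j])
            if abs(da - db) < tol:
                n += 1
    return n

def is_loop(scan_a, scan_b, min_ratio=0.8):
    """Declare a loop closure when a large enough fraction of the
    pairwise-distance constraints is satisfied (thresholds are illustrative)."""
    total = len(scan_a) * (len(scan_a) - 1) // 2
    return consistent_pairs(scan_a, scan_b) / total >= min_ratio
```

A scan and a rigidly moved copy of it pass this check, while two unrelated scans fail it; the real method additionally handles unknown correspondences and evaluates the loop state per structure level.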
Real-time indoor localization based on supporting infrastructure such as wireless devices and QR codes is usually costly and labor-intensive to implement. In this study, we explore a cheap, image-based alternative for indoor localization: a user can localize him- or herself simply by taking a photo of the surrounding indoor environment with a mobile phone, and no other equipment is required. This is achieved by employing image-matching and searching techniques over a dataset of pre-captured indoor images. First, a database of structured images of the indoor environment is constructed using image matching and the bundle adjustment algorithm; each image's relative pose (position and orientation) is then estimated and the images are tagged with semantic locations. A user's location is determined by comparing a photo taken on the mobile phone with the database, combining quick image searching, matching, and relative orientation. This study also explores image acquisition plans and the processing capacity of off-the-shelf mobile phones. Throughout the pipeline, a collection of indoor images with both rich and poor textures is examined, several feature detectors are used and compared, and pre-processing of complex indoor photos is implemented on the mobile phone. The preliminary experimental results prove the feasibility of this method. In future work, we aim to raise the efficiency of matching between indoor images and to exploit fast 4G wireless communication to ensure the speed and accuracy of localization within a client-server framework.
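The retrieval step at the heart of this approach can be sketched in a few lines. This is a deliberately simplified stand-in, not the paper's pipeline: it assumes a database of (image, location tag) pairs and uses a toy global descriptor (a normalized grey-level histogram) where the actual system uses local feature detectors, matching, and relative orientation to recover a full pose.

```python
import numpy as np

def global_descriptor(image):
    """Toy global descriptor: a normalized grey-level histogram.
    Stands in for the local-feature matching used in the real pipeline."""
    hist, _ = np.histogram(image, bins=32, range=(0, 256))
    return hist / max(hist.sum(), 1)

def localize(query, database):
    """Return the semantic tag of the database image whose descriptor is
    closest (L2 distance) to the query photo's descriptor, i.e. a coarse
    location estimate. `database` is a list of (image, tag) pairs."""
    q = global_descriptor(query)
    dists = [np.linalg.norm(q - global_descriptor(img)) for img, _ in database]
    return database[int(np.argmin(dists))][1]
```

In the full system, the retrieved database image's known relative pose would then be combined with relative orientation between the query and the match to refine the user's position.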
The rapid detection and fine pose estimation of textureless objects in red-green-blue and depth (RGB-D) images are challenging tasks, especially for small, dark industrial parts on a production line in cluttered scenes. In this paper, a novel practical method based on an RGB-D sensor, comprising 3D object segmentation and 6D pose estimation, is proposed. At the 3D object segmentation stage, 3D virtual and detected bounding boxes are combined to segment the 3D scene point clouds: the 3D virtual bounding boxes are determined from prior information on the parts and the charging tray, while the 3D detected bounding boxes are derived from the 2D bounding boxes produced by a Single Shot MultiBox Detector (SSD) network on the RGB image. At the 6D pose estimation stage, a coarse pose is estimated by fitting the central axis of the part to the observed, noisy 3D point cloud, and is then refined against the part model point cloud using the iterative closest point (ICP) algorithm. The proposed method has been successfully applied to robotic grasping on an industrial production line with a consumer-level depth camera. The results verified that grasping speed reaches the sub-second level and grasping accuracy reaches the millimeter level, and that the stability and robustness of the automation system meet the production requirements.
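The ICP refinement step named in the abstract can be illustrated with a minimal point-to-point variant. This sketch is not the paper's implementation: it assumes a coarse pose has already brought the model near the observed points, uses brute-force nearest-neighbour search (a k-d tree would be used in practice), and omits outlier rejection.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    via SVD of the cross-covariance (the Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching of src
    against dst with re-estimation of the rigid alignment."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours in dst for every current point
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        # compose with the accumulated transform: x' = R(R_total x + t_total) + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Given a model point cloud and a slightly rotated and translated copy, this recovers the transform between them; in the paper this refines the coarse axis-fitted pose against the part model.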