Survey-grade Lidar vendors have commercialized Lidar-based mobile mapping systems (MMSs) for several years. This high-end equipment guarantees highly accurate point clouds, but its high cost has kept practical deployment in autonomous driving unaffordable. To address this problem, we present a cost-effective MMS that generates accurate 3D color point clouds for autonomous vehicles. Among the major steps of color point cloud reconstruction, we first synchronize the timestamps of all sensors. A calibration process between camera and Lidar is developed to obtain the rotation and translation matrices, with which color attributes can be assigned to the corresponding Lidar points. We also employ control points to adjust the point cloud and fine-tune its absolute position. To overcome the limitations of the Global Navigation Satellite System/Inertial Measurement Unit (GNSS/IMU) positioning system, we use Normal Distributions Transform (NDT) localization to refine the trajectory and resolve the multi-scan dispersion issue. Experimental results show that the color point cloud reconstructed by the proposed MMS achieves centimeter-level position accuracy, meeting the requirements of high-definition (HD) maps for autonomous driving.
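The camera-Lidar fusion step described above can be sketched as follows. This is a minimal illustration, assuming a pinhole camera with intrinsic matrix K and an extrinsic calibration (R, t) from the Lidar frame to the camera frame; the function names and thresholds are hypothetical, not the paper's implementation.

```python
import numpy as np

def colorize_points(points_lidar, image, K, R, t):
    """Assign RGB colors to Lidar points by projecting them into a
    time-synchronized camera image using the extrinsic (R, t) and
    intrinsic (K) calibration. Points behind the camera or outside
    the image are left as NaN."""
    # Transform points from the Lidar frame into the camera frame.
    pts_cam = (R @ points_lidar.T + t.reshape(3, 1)).T   # shape (N, 3)
    colors = np.full_like(points_lidar, np.nan)
    h, w, _ = image.shape
    for i, (x, y, z) in enumerate(pts_cam):
        if z <= 0:                       # behind the camera plane
            continue
        u, v, s = K @ np.array([x, y, z])
        u, v = int(round(u / s)), int(round(v / s))
        if 0 <= u < w and 0 <= v < h:
            colors[i] = image[v, u]      # image indexed as row (v), col (u)
    return colors
```

In a full pipeline, the same projection is repeated for every synchronized camera frame, and the refined NDT trajectory supplies the pose that places each colored scan into the global map.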
Linear motors (LMs) are widely used in industrial automation applications that require precise and fast motion, as they convert electrical energy into linear actuation without any switching mechanism. This study develops an auto-tuning cross-coupled two-degree-of-freedom proportional-integral-derivative (ACC2PID) control strategy to achieve extremely high-precision contour control of an LM-driven X-Y-Y stage. Three 2PID controllers regulate the mover positions of the individual axes, while two compensators are designed to eliminate the contour errors in biaxial motion. Furthermore, an improved artificial bee colony algorithm is employed as a powerful optimization technique so that all control parameters can be evaluated and optimized online concurrently while ensuring the non-fragility of the proposed controller. In this way, the tracking error of each axis and the contour errors of the biaxial motions are minimized simultaneously, achieving satisfactory positioning accuracy and synchronization performance. Finally, experimental comparisons confirm the validity of the proposed ACC2PID control system for multi-axis contour tracking of the highly uncertain and nonlinear LM-driven X-Y-Y stage.
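To make the per-axis building block concrete, the following is a minimal sketch of a discrete-time two-degree-of-freedom PID controller. It uses a common textbook form with set-point weights b and c; the exact controller structure and gains in the paper are tuned online by the artificial bee colony algorithm, which is not reproduced here.

```python
class TwoDofPID:
    """Minimal two-degree-of-freedom PID (2PID) sketch (illustrative form):
        u = Kp*(b*r - y) + Ki*sum(r - y)*dt + Kd*d(c*r - y)/dt
    The set-point weights b and c shape the response to the reference r
    independently of the response to disturbances acting on the output y.
    """
    def __init__(self, kp, ki, kd, b=1.0, c=0.0, dt=1e-3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.b, self.c, self.dt = b, c, dt
        self.integral = 0.0
        self.prev_d_err = 0.0

    def update(self, ref, meas):
        # Integral acts on the raw error; P and D act on weighted errors.
        err = ref - meas
        self.integral += err * self.dt
        d_err = self.c * ref - meas
        deriv = (d_err - self.prev_d_err) / self.dt
        self.prev_d_err = d_err
        return (self.kp * (self.b * ref - meas)
                + self.ki * self.integral
                + self.kd * deriv)
```

In a cross-coupled scheme, one such controller runs per axis and a contour-error compensator adds a correction term computed from the combined axis errors.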
Purpose: FastSLAM is a popular method for solving the simultaneous localization and mapping (SLAM) problem. However, as the number of landmarks in real environments increases, each particle performs excessive comparisons of every measurement against all existing landmarks, and the execution speed becomes too slow for real-time navigation. This paper therefore aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.
Design/methodology/approach: To address this problem, the paper presents a computationally efficient SLAM (CESLAM) algorithm in which odometer information is used to update the robot's pose in each particle. When a measurement has a maximum likelihood with a known landmark in the particle, the particle state is updated before the landmark estimates are updated.
Findings: Simulation results show that the proposed CESLAM overcomes the heavy computational burden while improving the accuracy of localization and map building. To evaluate the method in practice, a Pioneer 3-DX robot with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) built on Speeded-Up Robust Features (SURF). Experimental results confirm that the proposed CEVSLAM system successfully estimates the robot pose and builds the map with satisfactory accuracy.
Originality/value: The proposed CESLAM algorithm eliminates the time-consuming, unnecessary comparisons of existing FastSLAM algorithms. Simulations show that the accuracy of robot pose and landmark estimation is greatly improved by CESLAM. Combining CESLAM and SURF, the authors establish CEVSLAM, which significantly improves estimation accuracy and computational efficiency. Practical experiments with a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.
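The maximum-likelihood data-association step that CESLAM builds on can be sketched as follows. This is an illustrative Gaussian-likelihood matcher, assuming range-bearing measurements and a per-landmark innovation covariance; the threshold and function names are hypothetical and do not reproduce the paper's exact update order.

```python
import numpy as np

def associate_measurement(z, predicted, innov_covs, new_landmark_ll=1e-6):
    """Maximum-likelihood data association for one particle in a
    FastSLAM-style filter (illustrative sketch).

    z           -- observed measurement, e.g. (range, bearing)
    predicted   -- predicted measurement for each known landmark
    innov_covs  -- innovation covariance S for each landmark
    Returns the index of the best-matching landmark, or None when every
    likelihood falls below the new-landmark threshold.
    """
    best_idx, best_ll = None, new_landmark_ll
    for i, (z_hat, S) in enumerate(zip(predicted, innov_covs)):
        innov = np.asarray(z) - np.asarray(z_hat)
        # Gaussian likelihood of the innovation under covariance S.
        ll = (np.exp(-0.5 * innov @ np.linalg.solve(S, innov))
              / np.sqrt(np.linalg.det(2 * np.pi * S)))
        if ll > best_ll:
            best_idx, best_ll = i, ll
    return best_idx
```

CESLAM's reported speedup comes from avoiding this full scan over landmarks whenever possible; the sketch above shows the baseline cost that grows linearly with the number of landmarks per particle.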