This paper describes the behavior of a commercial light detection and ranging (LiDAR) sensor in the presence of dust. The work is motivated by the need for perception systems that must operate where dust is present. The paper shows that the sensor's measurement behavior is systematic and predictable. Four behaviors of LiDAR sensors are articulated and explained in terms of the shape of the return signals from emitted light pulses. We subject the commercial sensor to a series of tests that measure the return pulses and show that they are consistent with theoretical predictions of behavior. Several important conclusions emerge: (i) where LiDAR measures dust, it ranges to the leading edge of a dust cloud rather than producing random noise; (ii) dust begins to affect measurements when the atmospheric transmittance falls below roughly 71%-74%, although this threshold varies with conditions; (iii) LiDAR is capable of ranging to a target through dust clouds with transmittance as low as 2% if the target is retroreflective and 6% if it has low reflectivity; (iv) the effects of airborne particulates such as dust are less evident in the far field. The significance of this paper lies in providing insight into how to make better use of measurements from off-the-shelf LiDAR sensors in solving perception problems.
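To make the reported thresholds concrete, the short sketch below applies them to flag how a LiDAR range obtained through dust might be interpreted. Only the numeric cut-offs come from the findings above; the function, its interface, and the classification labels are hypothetical illustrations.

```python
def ranging_expected(transmittance: float, retroreflective_target: bool) -> str:
    """Classify expected LiDAR ranging behaviour at a given atmospheric transmittance.

    Threshold values are taken from the findings summarised above; everything
    else in this helper is an illustrative assumption.
    """
    if transmittance >= 0.74:
        return "unaffected"        # dust effects only appear below ~71%-74% transmittance
    floor = 0.02 if retroreflective_target else 0.06
    if transmittance >= floor:
        return "degraded"          # returns may range to the leading edge of the dust cloud
    return "ranging unlikely"


print(ranging_expected(0.50, retroreflective_target=True))   # degraded
print(ranging_expected(0.01, retroreflective_target=False))  # ranging unlikely
```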
Correct registration of vehicle‐mounted sensors is a fundamental prerequisite for the perception capabilities required for automation. However, the use of standard markers or artificial‐feature‐based approaches is often infeasible in environments that do not allow for additional infrastructure. This paper presents a method for sensor registration that overcomes this limitation by utilizing the geometric structure of the terrain surrounding the sensor platform. The method determines the information content of the registration parameters in measurements of the terrain and updates only the subspace of parameters for which there is information. The performance of the method is demonstrated for registration of a sensor to a large mining haul‐truck and to a swing‐loading excavator. The method is shown to successfully register the sensor to each vehicle using a surveyed topographic map of the terrain. Performing self‐registration using a map generated by the sensor itself is also demonstrated; however, specific vehicle trajectory conditions are required to provide information on all registration parameters.
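As a rough illustration of updating only the informative subspace of registration parameters, the sketch below restricts a generic Gauss-Newton step to well-observed eigendirections of the information matrix. The Jacobian layout, eigenvalue threshold, and solver are assumptions made for illustration and are not the paper's implementation.

```python
import numpy as np

def informed_update(J: np.ndarray, r: np.ndarray, eig_threshold: float = 1e-3) -> np.ndarray:
    """One registration update restricted to parameter directions with information.

    J : (m, 6) Jacobian of terrain-range residuals w.r.t. the six registration
        parameters (translation and rotation); r : (m,) residual vector.
    """
    H = J.T @ J                              # approximate information matrix
    w, V = np.linalg.eigh(H)                 # eigen-decompose into parameter directions
    keep = w > eig_threshold * w.max()       # retain only well-observed directions
    g = V[:, keep].T @ (J.T @ r)             # gradient projected into that subspace
    delta_sub = g / w[keep]                  # Gauss-Newton step within the subspace
    return -(V[:, keep] @ delta_sub)         # lift back; unobserved directions stay at zero


# Synthetic example in which the sixth parameter is unobservable from the terrain.
rng = np.random.default_rng(0)
J = rng.normal(size=(200, 6))
J[:, 5] = 0.0
r = rng.normal(size=200)
print(informed_update(J, r))                 # the sixth component remains exactly zero
```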
The capability to estimate the pose of known geometry from point cloud data is a frequently arising requirement in robotics and automation applications. This problem is directly addressed by the Iterative Closest Point (ICP) algorithm; however, that method has several limitations and lacks robustness. This paper makes the case for an alternative method that seeks the most likely solution based on the available evidence. Specifically, an evidence-based metric is described that selects the object pose maximising the conditional likelihood of reproducing the observed range measurements. A seedless search heuristic is also provided to find the most likely pose estimate in light of these measurements. The method is demonstrated to provide pose estimation (2D and 3D shape poses as well as joint-space searches), object identification/classification, and platform localisation. Furthermore, the method is shown to be robust in cluttered or non-segmented point cloud data, as well as robust to uncertainty in both the measurements and the extrinsic sensor calibration.
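A minimal sketch of the likelihood idea as described above: each candidate pose is scored by the conditional likelihood of the observed ranges, with range prediction delegated to a ray-casting helper. The Gaussian noise model, the `simulate_ranges` callback, and the discrete candidate set are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def log_likelihood(observed: np.ndarray, predicted: np.ndarray, sigma: float = 0.03) -> float:
    """Conditional log-likelihood (up to a constant) of the observed ranges
    under an assumed Gaussian range-noise model."""
    return float(-0.5 * np.sum(((observed - predicted) / sigma) ** 2))

def most_likely_pose(observed_ranges, candidate_poses, simulate_ranges):
    """Seedless search over candidate poses: return the pose whose simulated
    ranges (e.g. by ray-casting the known geometry) best explain the data."""
    scores = [log_likelihood(observed_ranges, simulate_ranges(pose))
              for pose in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```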
This paper addresses the problem of estimating object pose from high-density LiDAR measurements in unpredictable field robotic environments. Point clouds collected in such environments do not lend themselves to providing an initial estimate or to systematic segmentation. A novel approach is presented that evaluates measurements individually for the evidence they provide to a collection of pose hypotheses. A maximum evidence strategy is constructed, based on the idea that the most likely pose is the one most consistent with the observed LiDAR range measurements. This evidence-based approach is shown to handle the diversity of range measurements without an initial estimate or segmentation, and the method is robust to dust. The approach is demonstrated on two pose estimation problems associated with the automation of a large mining excavator.
Keywords: lidar, mining automation, perception, pose estimation, sensors
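The sketch below illustrates the per-measurement evidence idea in a hedged form: each range measurement contributes a bounded amount of evidence to every pose hypothesis, so dust and clutter returns add little and neither an initial estimate nor segmentation is needed. The Gaussian evidence kernel and the `predict_ranges` helper are illustrative assumptions.

```python
import numpy as np

def evidence(observed: float, predicted: float, sigma: float = 0.05) -> float:
    """Bounded evidence contributed by a single range measurement; dust or
    clutter returns far from the prediction contribute almost nothing."""
    return float(np.exp(-0.5 * ((observed - predicted) / sigma) ** 2))

def max_evidence_pose(observed_ranges, hypotheses, predict_ranges):
    """Return the pose hypothesis that accumulates the most evidence over all
    individual measurements."""
    totals = [sum(evidence(o, p) for o, p in zip(observed_ranges, predict_ranges(h)))
              for h in hypotheses]
    return hypotheses[int(np.argmax(totals))]
```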
Registration, also known as extrinsic calibration, is the process of determining the position and orientation of a sensor relative to a known frame of reference. For ranging sensors such as light detection and ranging (LiDAR) used in field robotic applications, the quality of the registration determines the utility of the range measurements. This paper makes two contributions. The first is the introduction of a new method, termed maximum sum of evidence (MSoE), for registering three‐dimensional LiDAR sensors to moving platforms. This method is shown to produce more accurate registration solutions than two leading methods for these sensors, the adaptive structure registration filter (ASRF) and Rényi quadratic entropy (RQE). The second contribution is a study of the accuracy of MSoE registration against these two other approaches. One of these, like MSoE, requires a truth model of the environment; the other, a model‐free method, seeks the registration that minimizes the RQE of a compound point cloud. The main finding of this investigation is that while the model‐based methods prove more accurate than the model‐free approach, the results of all three methods are fit for their intended field robotic applications. This leads us to conclude that registration based on RQE is preferable in many, if not all, field robotic applications for reasons of convenience, since a truth model of the environment is not required.
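For context on the model‐free alternative discussed above, the sketch below computes the Rényi quadratic entropy of a compound point cloud with a Gaussian kernel; a candidate registration that produces a crisper compound cloud yields a lower value. The kernel width and the brute-force pairwise sum are assumptions for illustration only.

```python
import numpy as np

def rqe(points: np.ndarray, sigma: float = 0.1) -> float:
    """Rényi quadratic entropy of an (N, 3) point cloud using a Gaussian kernel.

    The constant normalisation factor of the kernel is dropped because it only
    shifts the entropy and does not change the ranking of candidate registrations.
    """
    diffs = points[:, None, :] - points[None, :, :]          # pairwise differences
    sq_dists = np.sum(diffs ** 2, axis=-1)                   # squared distances
    information_potential = np.mean(np.exp(-sq_dists / (4.0 * sigma ** 2)))
    return float(-np.log(information_potential))

# A candidate sensor registration would be scored by re-projecting the LiDAR
# points with it and preferring the registration whose compound cloud has the
# lowest rqe(...) value.
rng = np.random.default_rng(1)
crisp = rng.normal(scale=0.05, size=(200, 3))
blurred = rng.normal(scale=0.50, size=(200, 3))
print(rqe(crisp) < rqe(blurred))   # True: the crisper cloud has lower entropy
```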