Mapping a vehicle's environment and localizing the vehicle within that unknown environment are complex problems. Although many approaches based on various sensory inputs and computational concepts have been successfully applied to ground robot localization, localizing an unmanned aerial vehicle (UAV) remains difficult due to variations in altitude and motion dynamics. This paper proposes a robust and efficient indoor mapping and localization solution for a UAV equipped with low-cost Light Detection and Ranging (LiDAR) and Inertial Measurement Unit (IMU) sensors. Exploiting the typical geometric structure of indoor environments, the planar position of the UAV can be efficiently calculated with a point-to-point scan matching algorithm using measurements from a horizontally scanning primary LiDAR. The altitude of the UAV with respect to the floor can be estimated accurately using a vertically scanning secondary LiDAR mounted orthogonally to the primary LiDAR. A Kalman filter then derives the 3D position by fusing the primary and secondary LiDAR data. Additionally, this work presents a novel method for real-time classification of a pipeline in an indoor map, built on the proposed navigation approach. Pipeline classification is based on pipe radius estimation from a region of interest (ROI) and the typical angle. The ROI is selected by finding the nearest neighbors of a selected seed point in the pipeline point cloud, and the typical angle is estimated from a directional histogram. Experimental results demonstrate the feasibility of the proposed navigation system and its integration with a real-time application in industrial plant engineering.
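As a rough illustration of the fusion step described above, the sketch below shows a minimal constant-velocity Kalman filter combining a planar (x, y) fix from scan matching with an altitude (z) measurement from the vertical LiDAR. The state layout, update rate, and noise values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal constant-velocity Kalman filter fusing planar (x, y) pose from
# scan matching with altitude (z) from the vertically scanning LiDAR.
# State: [x, y, z, vx, vy, vz]. All noise values are assumptions.

dt = 0.05                                  # assumed 20 Hz update rate
F = np.eye(6)
F[0, 3] = F[1, 4] = F[2, 5] = dt           # constant-velocity motion model
Q = np.eye(6) * 1e-3                       # process noise (assumption)

H_xy = np.array([[1, 0, 0, 0, 0, 0],       # horizontal LiDAR observes x, y
                 [0, 1, 0, 0, 0, 0]], dtype=float)
H_z = np.array([[0, 0, 1, 0, 0, 0]], dtype=float)  # vertical LiDAR observes z

x = np.zeros(6)                            # state estimate
P = np.eye(6)                              # state covariance

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z, H, R):
    """Standard Kalman measurement update."""
    global x, P
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P

# One fusion cycle: planar fix from scan matching, altitude from vertical scan.
predict()
update(np.array([1.2, 0.4]), H_xy, np.eye(2) * 0.01)   # (x, y) in metres
update(np.array([1.5]), H_z, np.eye(1) * 0.02)         # z in metres
print(x[:3])                               # fused 3D position estimate
```

Running the two measurement updates against a shared state is what lets the horizontal and vertical LiDARs contribute independently to a single 3D estimate.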
Real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. For autonomous vehicles in particular, efficient fusion of data from these two sensor types is important for estimating the depth of objects as well as detecting objects at short and long distances. Because the two sensors capture different attributes of the environment simultaneously, integrating those attributes with an efficient fusion approach greatly benefits reliable and consistent perception of the environment. This paper presents a method to estimate the distance (depth) between a self-driving car and other vehicles, objects, and signboards on its path using an accurate fusion approach. Based on geometric transformation and projection, low-level sensor fusion was performed between a camera and LiDAR using a 3D marker. The fusion information is then used to estimate the distance of objects detected by the RefineDet detector. Finally, the accuracy and performance of the sensor fusion and distance estimation approach were evaluated quantitatively and qualitatively in real road and simulation scenarios. The proposed low-level sensor fusion, based on geometric transformation and projection for object distance estimation, thus proves to be a promising solution for enabling reliable and consistent environment perception for autonomous vehicles.
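A minimal sketch of the projection underlying such low-level fusion, assuming placeholder intrinsics K, extrinsics (R, t), and a hypothetical detector bounding box: LiDAR points are transformed into the camera frame, projected into the image, and the median depth of the points falling inside the box serves as the object distance.

```python
import numpy as np

# Sketch of low-level LiDAR-camera fusion: project LiDAR points into the
# image with calibration parameters, then take the median depth of the
# points inside a detector bounding box. K, R, t and the box are
# placeholder values standing in for the calibrated/detected ones.

K = np.array([[700.0, 0.0, 640.0],         # camera intrinsics (assumed)
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                              # LiDAR-to-camera rotation (assumed)
t = np.array([0.0, -0.1, 0.2])             # LiDAR-to-camera translation (assumed)

def object_distance(points_lidar, box):
    """Median camera-frame depth of LiDAR points inside (x1, y1, x2, y2)."""
    pc = points_lidar @ R.T + t            # transform into the camera frame
    pc = pc[pc[:, 2] > 0]                  # keep points in front of the camera
    uv = pc @ K.T                          # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]
    x1, y1, x2, y2 = box
    inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
              (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    return float(np.median(pc[inside, 2])) if inside.any() else None

# Example: random scene points and a hypothetical RefineDet box.
pts = np.random.uniform([-5, -2, 2], [5, 2, 40], size=(5000, 3))
print(object_distance(pts, (600, 300, 700, 420)))
```

Taking the median rather than the minimum depth makes the estimate robust to stray LiDAR returns that project inside the box but belong to the background.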
Acquiring three-dimensional point cloud data of a scene with a laser scanner and aligning the point cloud within a real-time camera video view is a recent concept and an efficient method for constructing, monitoring, and retrofitting complex engineering models in heavy industrial plants. This article presents a novel prototype framework for virtual retrofitting applications. The workflow comprises an efficient 4-in-1 alignment: it begins with registration of pre-processed three-dimensional point cloud data against a partial point cloud from LiDAR, followed by alignment of the pre-processed point cloud within the video scene using frame-by-frame registration. Finally, the approach supports pre-retrofitting applications: pre-generated three-dimensional computer-aided design models are virtually retrofitted with the help of the synchronized point cloud, and the video scene is efficiently visualized using a wearable virtual reality device. The prototype is demonstrated in a real-world setting using the partial point cloud from LiDAR, pre-processed point cloud data, and video from a two-dimensional camera.
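The registration of the partial LiDAR cloud against the pre-processed cloud could be sketched with a basic point-to-point ICP such as the one below; the correspondence search, fixed iteration count, and lack of outlier rejection are simplifications relative to a production pipeline, and all parameters are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal point-to-point ICP sketch for aligning a partial LiDAR point
# cloud (source) onto the pre-processed cloud (target). A real pipeline
# would add downsampling, outlier rejection, and a good initial guess.

def icp(source, target, iters=30):
    """Return a 4x4 transform aligning `source` (Nx3) onto `target` (Mx3)."""
    tree = cKDTree(target)
    T = np.eye(4)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)           # nearest-neighbour correspondences
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)  # cross-covariance of centred points
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                     # optimal rotation (Kabsch)
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                       # accumulate the incremental transform
    return T

# Example: recover a known rigid motion of a random cloud.
tgt = np.random.rand(1000, 3)
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
src = (tgt - 0.02) @ Rz.T
print(icp(src, tgt))
```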
Augmented reality (AR) systems are becoming next-generation technologies for intelligently visualizing the real world in 3D. This research proposes a sensor-fusion-based AR system for pipeline inspection and retrofitting processes in industrial plants. The proposed methodology uses prebuilt 3D point cloud data of the environment, a real-time Light Detection and Ranging (LiDAR) scan, and an image sequence from a camera. First, the current pose of the sensor platform is estimated by matching the LiDAR scan against the prebuilt point cloud data; from the current pose, the prebuilt point cloud is augmented onto the camera image using the LiDAR-camera calibration parameters. Next, based on the user's selection in the augmented view, the geometric parameters of a pipe are estimated. In addition to pipe parameter estimation, retrofitting of the existing plant using the augmented scene is illustrated. Finally, the step-by-step procedure of the proposed method was experimentally verified at a water treatment plant. Results show that integrating AR with building information modelling (BIM) greatly benefits the post-occupancy evaluation process and the pre-retrofitting and renovation process for identifying, evaluating, and updating the geometric specifications of a construction environment.
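The pipe parameter estimation could, under simplifying assumptions, look like the following: the region of interest is gathered as the k nearest neighbours of the user-selected seed point, projected onto the plane perpendicular to the pipe axis (taken as known here, standing in for the directional-histogram step of the companion work), and the radius is recovered with an algebraic (Kåsa) circle fit. The function and parameter names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch of pipe-radius estimation: ROI from the k nearest neighbours of a
# seed point, projection onto the plane perpendicular to the pipe axis,
# then an algebraic (Kasa) least-squares circle fit. The axis is assumed
# known; k and the noise level are illustrative.

def pipe_radius(cloud, seed, axis, k=300):
    tree = cKDTree(cloud)
    _, idx = tree.query(seed, k=k)         # ROI around the seed point
    roi = cloud[idx]
    axis = axis / np.linalg.norm(axis)
    # Build an orthonormal basis (u, v) of the plane perpendicular to the axis.
    u = np.cross(axis, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:           # axis parallel to z: pick another
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    p2d = np.c_[roi @ u, roi @ v]          # project ROI into that plane
    # Kasa fit: solve 2*cx*x + 2*cy*y + c = x^2 + y^2 in least squares,
    # where c = r^2 - cx^2 - cy^2.
    A = np.c_[2 * p2d, np.ones(len(p2d))]
    b = (p2d ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(np.sqrt(c + cx**2 + cy**2))

# Example: noisy cylinder of radius 0.2 m along the z-axis.
theta = np.random.uniform(0, 2 * np.pi, 3000)
z = np.random.uniform(0, 1, 3000)
cyl = np.c_[0.2 * np.cos(theta), 0.2 * np.sin(theta), z]
cyl += np.random.normal(0, 0.002, cyl.shape)
print(pipe_radius(cyl, cyl[0], np.array([0.0, 0.0, 1.0])))
```

The algebraic fit is linear and therefore fast enough for interactive use in the augmented view; a geometric refinement could follow if higher accuracy is needed.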