Plane extraction is a fundamental capability in many applications, including semantic digital map reconstruction and path planning for unmanned ground vehicles. Owing to the heterogeneous density and unstructured spatial distribution of three-dimensional (3D) point clouds collected by light detection and ranging (LiDAR), plane extraction from such data remains a significant challenge. This paper proposes a parallel 3D Hough transform algorithm for rapid and precise plane detection in 3D LiDAR point clouds. After transforming all 3D points from the Cartesian coordinate system into a pre-defined 3D Hough space, the generated Hough space is rasterised into a regular grid of cells, each cell storing the count of points that vote for it. A 3D connected component labeling algorithm is developed to group high-valued cells in the Hough space into clusters. The peaks of these clusters are then extracted, yielding the target planar surfaces in polar coordinates. Because the laser beams emitted by a LiDAR sensor are fixed at several angles, the collected 3D points on a planar surface are distributed as a set of horizontal, parallel circles. These circles can mislead plane detection on wall surfaces toward spurious parallel planes. To obtain accurate plane parameters, this paper adopts a fraction-to-fraction method that gradually transforms the raw point cloud into a series of sub-Hough-space buffers. In the proposed plane detection algorithm, graphics processing unit (GPU) programming is applied to accelerate both the 3D Hough space updates and the peak search.
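To make the voting step concrete, the following is a minimal CPU sketch of a rasterised 3D Hough accumulator for plane detection. It is not the paper's GPU implementation; the resolution parameters, the (theta, phi, rho) polar plane parameterisation, and the function name are illustrative assumptions.

```python
import numpy as np

def hough_plane_accumulate(points, n_theta=90, n_phi=90, n_rho=100, rho_max=20.0):
    """Vote each 3D point into a rasterised (theta, phi, rho) Hough space.

    Planes are parameterised in polar form:
        rho = x*cos(theta)*sin(phi) + y*sin(theta)*sin(phi) + z*cos(phi)
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    accumulator = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)

    # Precompute the plane normal for every (theta, phi) cell.
    ct, st = np.cos(thetas), np.sin(thetas)
    cp, sp = np.cos(phis), np.sin(phis)
    nx = np.outer(ct, sp)            # shape (n_theta, n_phi)
    ny = np.outer(st, sp)
    nz = np.tile(cp, (n_theta, 1))

    t_idx, p_idx = np.meshgrid(np.arange(n_theta), np.arange(n_phi), indexing="ij")
    for x, y, z in points:
        rho = x * nx + y * ny + z * nz                    # signed distance per cell
        rho_idx = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int),
                          0, n_rho - 1)
        np.add.at(accumulator, (t_idx, p_idx, rho_idx), 1)

    return accumulator, thetas, phis

# Peak search: the cell with the most votes corresponds to the dominant plane.
# acc, thetas, phis = hough_plane_accumulate(points)
# t, p, r = np.unravel_index(acc.argmax(), acc.shape)
```

In the paper this accumulation and the subsequent peak search are parallelised on the GPU and the peaks are grouped with 3D connected component labeling rather than taken from a single argmax.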
Deep learning has recently been applied to a wide variety of fields. In particular, numerous studies have investigated self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end control learns the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach does not analyze the meaning of the input vectors; it learns to produce optimal output vectors directly from them. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from camera images. However, high-resolution images captured from a car cannot be used directly as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be reduced efficiently. It is therefore necessary to extract features from the captured images automatically and to generate input images by merging the image regions that contain the extracted features. This paper proposes a learning method for end-to-end control that generates CNN input images by extracting road regions from the captured images, detecting the edges of the extracted road regions, and merging the image regions that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments in The Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method by comparing the accumulated difference in steering wheel angle obtained with the generated images against that obtained with resized images covering the entire captured area and with cropped images covering only part of it. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% relative to the resized and cropped images, respectively.
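As a point of reference for the end-to-end setup, the sketch below shows a small regression CNN that maps a preprocessed camera image to a single steering value. It is a generic baseline written in PyTorch, not the model introduced in the paper; the layer sizes, the 66x200 input resolution, and the class name are assumptions.

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """Minimal end-to-end CNN: preprocessed image -> steering angle (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),   # infers the flattened feature size
            nn.Linear(100, 10), nn.ReLU(),
            nn.Linear(10, 1),                # single steering output
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# model = SteeringCNN()
# angle = model(torch.randn(1, 3, 66, 200))   # one 66x200 RGB input image
```

In the proposed method, the network input would be the merged image built from the extracted road regions and their detected edges rather than a plain resized or cropped frame.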
Unmanned aerial vehicles (UAVs) with auto-pilot capabilities are often used for surveillance and patrol. Pilots set flight points on a map to navigate the UAV to the imaging points where surveillance or patrolling is required. However, a map alone is limited in conveying information such as absolute altitudes and angles, so this information must be set accurately. This paper therefore proposes a method to construct environmental symmetric big data with a UAV during flight by designating the imaging and non-imaging points for surveillance and patrol. The K-Means-based algorithm proposed in this paper is then employed to divide the imaging points set by the pilot into K clusters, and K imaging points are determined from these clusters. Flight data are then used to set the points to which the UAV will fly. In our experiment, flight records were gathered with a UAV monitoring a stadium, and the imaging and non-imaging points were set using the proposed method and compared with the points determined by the traditional K-Means algorithm. With the proposed method, the cumulative distance between the cluster centroids and their members was reduced by 87.57% relative to the traditional K-Means algorithm. With the traditional K-Means algorithm, imaging points were not created at the five points desired by the pilot, and two incorrect points were obtained. With the proposed method, two incorrect imaging points were also obtained, and because of these two incorrect imaging points, two of the points desired by the pilot were not generated.
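For context, the following is a minimal sketch of the plain K-Means baseline used for comparison, clustering pilot-specified imaging points and reporting the cumulative member-to-centroid distance that the paper uses as its comparison metric. It is not the paper's modified algorithm; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def kmeans(points, k, n_iter=100, seed=0):
    """Plain K-Means over 2D imaging points (baseline, not the proposed variant)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster is empty.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    # Cumulative distance of members to their centroid (the comparison metric).
    cumulative = np.linalg.norm(points - centroids[labels], axis=1).sum()
    return centroids, labels, cumulative

# Example: cluster pilot-set imaging points into K=5 candidate imaging points.
# pts = np.random.rand(200, 2) * 100
# centers, labels, cost = kmeans(pts, k=5)
```

The proposed method further constrains the clustering with the flight data so that the resulting imaging points better match those intended by the pilot.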