Abstract
Ground segmentation is an important step for any autonomous or remote-controlled system. After the ground and nonground parts are separated, tasks such as object tracking and 3D reconstruction can be performed. In this paper, we propose an efficient method for segmenting the ground data of point clouds acquired from multi-channel Lidar sensors. The goal of this study is to completely separate ground points and nonground points in real time. The proposed method segments ground data efficiently and accurately in various environments, such as flat terrain, undulating/rugged terrain, and mountainous terrain. First, the point cloud in each frame is divided into small groups. We then process the vertical and horizontal directions separately, before processing both directions concurrently. Experiments were conducted, and the results show the effectiveness of the proposed ground segmentation method. For flat and sloping terrains, the accuracy is over 90%; for bumpy terrains, it is over 80%. The method runs at 145 frames per second. Therefore, in both simple and complex terrains, it achieves good accuracy and real-time performance.

Introduction
The Internet of Things (IoT) is growing rapidly worldwide [1]-[7]. In an IoT-based system for autonomous vehicles, light detection and ranging (Lidar) sensors are often used to collect data about the surrounding environment. Furthermore, in human-centric autonomous systems, robots are also equipped with several cameras and an inertial measurement unit-global positioning system (IMU-GPS) sensor. In each frame, the Lidar sensor returns a point cloud that describes the terrain around the robot. The data from the Lidar sensor are transferred to a computer and split into two groups: ground and nonground. The first group includes ground points of terrain that the robot can traverse. The second group consists of nonground points that the robot cannot traverse, such as cars, trees, and walls. If the terrain slopes so steeply that the autonomous robot cannot traverse it, the corresponding points are also clustered into the nonground group. The segmentation of three-dimensional (3D) point cloud ground data is thus a fundamental task for these systems.
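The abstract does not include source code, but the general idea of slope-based ground/nonground splitting can be illustrated with a small sketch. The Python example below is a hypothetical, minimal filter, not the authors' exact algorithm: dividing the frame into azimuth sectors, the sector count, the 15-degree slope limit, and the seed height are all illustrative assumptions.

```python
# Hypothetical sketch of slope-based ground labeling for one Lidar frame.
# The sector grouping and all thresholds are illustrative assumptions.
import numpy as np

def segment_ground(points, n_sectors=180, slope_deg=15.0, seed_height=0.3):
    """Return a boolean mask: True for ground points, False for nonground.

    points: (N, 3) array of x, y, z coordinates (sensor at the origin).
    """
    labels = np.zeros(len(points), dtype=bool)
    azimuth = np.arctan2(points[:, 1], points[:, 0])
    sector = ((azimuth + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    ranges = np.hypot(points[:, 0], points[:, 1])
    max_slope = np.tan(np.radians(slope_deg))

    for s in range(n_sectors):
        idx = np.where(sector == s)[0]
        if idx.size == 0:
            continue
        # Walk outward through the sector, comparing each point with the
        # last accepted ground point to estimate the local slope.
        order = idx[np.argsort(ranges[idx])]
        prev = None
        for i in order:
            if prev is None:
                ok = points[i, 2] < seed_height  # low near-sensor seed point
            else:
                dr = max(ranges[i] - ranges[prev], 1e-6)
                ok = abs(points[i, 2] - points[prev, 2]) / dr < max_slope
            if ok:
                labels[i] = True
                prev = i
    return labels
```

Walking outward through each sector keeps the slope comparison local, which is what lets filters of this family accept gently sloping traversable terrain while rejecting near-vertical structures such as walls and cars.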
This paper proposes a cloud-based framework that optimizes the three-dimensional (3D) reconstruction of multiple types of sensor data captured from multiple remote robots. A working environment with multiple remote robots requires massive amounts of data to be processed in real time, which cannot be achieved with a single computer. In the proposed framework, reconstruction is carried out on cloud-based servers via distributed data processing. Consequently, users do not need to consider computing resources even when utilizing multiple remote robots. The sensors' bulk data are transferred to a master server, which divides the data and allocates the processing to a set of slave servers; the segmentation and reconstruction tasks are thus implemented on the slave servers. The reconstructed 3D space is created by fusing all the results on a visualization server, and the results are saved in a database that users can access and visualize in real time. The results of the experiments conducted verify that the proposed system is capable of providing real-time 3D scenes of the surroundings of remote robots.
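As a rough local analogue of the master/slave division of work described above, the sketch below stands in a process pool for the cloud slave servers; `process_chunk` and the even chunking rule are hypothetical placeholders for the real segmentation and reconstruction services.

```python
# Minimal local analogue of the master/slave split, assuming a process
# pool in place of cloud slave servers. process_chunk is a toy stand-in.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def process_chunk(chunk):
    # Placeholder for per-server work (segmentation + reconstruction):
    # here, just average the "ground" points below a height threshold.
    ground = chunk[chunk[:, 2] < 0.2]
    return ground.mean(axis=0) if len(ground) else None

def master(points, n_slaves=4):
    chunks = np.array_split(points, n_slaves)  # master divides the frame
    with ProcessPoolExecutor(max_workers=n_slaves) as pool:
        results = list(pool.map(process_chunk, chunks))
    # Visualization-server role: fuse the partial results.
    return [r for r in results if r is not None]

if __name__ == "__main__":
    frame = np.random.rand(100_000, 3) * 10
    print(master(frame))
```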
Three-dimensional (3D) point clouds are important for many applications, including object tracking and 3D scene reconstruction. Point clouds are usually obtained from laser scanners, but their high cost impedes the widespread adoption of this technology. We propose a method to generate the 3D point cloud corresponding to a single red-green-blue (RGB) image. The method retrieves high-quality 3D data from two-dimensional (2D) images captured by conventional cameras, which are generally less expensive. The proposed method comprises two stages. First, a generative adversarial network generates a depth image estimation from a single RGB image. Then, the 3D point cloud is calculated from the depth image. The estimation relies on the parameters of the depth camera employed to generate the training data. The experimental results verify that the proposed method provides high-quality 3D point clouds from single 2D images. Moreover, the method does not require a PC with outstanding computational resources, further reducing implementation costs, as only a moderate-capacity graphics processing unit can efficiently handle the calculations.

INDEX TERMS Artificial intelligence, image processing, sensors, machine learning, neural networks.
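The second stage, computing the point cloud from the estimated depth image, follows the standard pinhole back-projection X = (u - cx)Z/fx, Y = (v - cy)Z/fy. The sketch below assumes metric depth values and known intrinsics fx, fy, cx, cy of the depth camera used to generate the training data; the function name is illustrative.

```python
# Sketch of pinhole back-projection from a depth image to a point cloud,
# assuming metric depth and known camera intrinsics (fx, fy, cx, cy).
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depth values Z per pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # Y = (v - cy) * Z / fy
    pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no valid depth estimate
```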