Interest in developing and deploying fully autonomous vehicles on public roads is now in full swing. Driverless capabilities, already widespread in modern vehicles through advanced driver-assistance systems (ADAS), require highly reliable perception features to navigate the environment, with light detection and ranging (LiDAR) sensors being a key instrument for detecting the distance and speed of nearby obstacles and for providing high-resolution 3D representations of the surroundings in real time. However, despite being regarded as a game-changer in the autonomous driving paradigm, LiDAR sensors can be very sensitive to adverse weather conditions, which can severely affect the behavior of the vehicle's perception system. Aiming to improve LiDAR operation in challenging weather conditions, and thereby contribute to the higher driving automation levels defined by the Society of Automotive Engineers (SAE), this article proposes a weather denoising method called Dynamic light-Intensity Outlier Removal (DIOR). DIOR combines two state-of-the-art approaches, the dynamic radius outlier removal (DROR) and low-intensity outlier removal (LIOR) algorithms, supported by an embedded reconfigurable hardware platform. By resorting to field-programmable gate array (FPGA) technology, DIOR outperforms state-of-the-art outlier removal solutions, achieving better accuracy and performance while guaranteeing real-time requirements.
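To make the DROR idea concrete, the sketch below shows a minimal software version of a dynamic-radius outlier filter: the neighbor-search radius grows with each point's range, so sparse far-field returns are not over-pruned, while dense near-field noise (e.g. snowflakes) is removed. This is an illustrative sketch only, not the DIOR/FPGA implementation; the function name and parameters (`k_min`, `beta`, `ang_res_rad`, `r_min`) are assumptions for illustration.

```python
import numpy as np

def dror_filter(points, k_min=3, beta=3.0, ang_res_rad=0.01, r_min=0.05):
    """Dynamic-radius outlier filter (DROR-style sketch).

    Keeps a point only if it has at least k_min neighbors inside a
    search radius that scales with the point's range, so that sparse
    but valid far-field returns are preserved.
    """
    d = np.linalg.norm(points, axis=1)                 # range of each point
    radii = np.maximum(r_min, beta * d * ang_res_rad)  # per-point search radius
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        # subtract 1 so the point does not count itself as a neighbor
        keep[i] = (dists < radii[i]).sum() - 1 >= k_min
    return keep
```

A LIOR-style stage would simply add an intensity threshold on top of this mask, dropping low-reflectivity returns typical of airborne particles. A real-time implementation would replace the brute-force neighbor search with a k-d tree or a hardware-friendly grid structure.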
The automotive industry is facing an unprecedented technological transformation towards fully autonomous vehicles. Optimists predict that, by 2030, cars will be sufficiently reliable, affordable, and common to displace most human driving. To cope with these trends, autonomous vehicles require reliable perception systems to hear and see their entire surroundings, with light detection and ranging (LiDAR) sensors being a key instrument for recreating a 3D visualization of the world. However, for reliable operation, such systems require LiDAR sensors to provide high-resolution 3D representations of the car's vicinity, which results in millions of data points to be processed in real time. This article proposes ALFA-Pi, a data packet decoding and reconstruction system fully deployed on an embedded reconfigurable hardware platform. By resorting to field-programmable gate array (FPGA) technology, ALFA-Pi can interface with different LiDAR sensors simultaneously while providing custom representation outputs to high-level perception systems. By accelerating the LiDAR interface, the proposed system outperforms current software-only approaches, achieving lower latency in data acquisition and decoding while sustaining high throughput.
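The core step of any LiDAR packet decoder, regardless of vendor, is converting the raw polar returns carried in each packet (range, azimuth, elevation) into Cartesian XYZ points. The sketch below shows this reconstruction step in software; it is a generic illustration, not the ALFA-Pi hardware pipeline, and the function name is an assumption.

```python
import numpy as np

def decode_returns(ranges_m, azimuth_deg, elevation_deg):
    """Convert polar LiDAR returns into Cartesian XYZ points.

    Uses the common automotive convention: azimuth measured clockwise
    from the +Y (forward) axis, elevation from the horizontal plane.
    """
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    x = ranges_m * np.cos(el) * np.sin(az)
    y = ranges_m * np.cos(el) * np.cos(az)
    z = ranges_m * np.sin(el)
    return np.stack([x, y, z], axis=-1)
```

An FPGA implementation replaces the trigonometric calls with lookup tables indexed by the (discrete) azimuth and per-channel elevation angles, which is what makes line-rate decoding of millions of points per second feasible.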
In the near future, autonomous vehicles with full self-driving features will populate our public roads. However, fully autonomous cars will require robust perception systems to safely navigate the environment, including cameras, RADAR devices, and Light Detection and Ranging (LiDAR) sensors. LiDAR is currently a key sensor for the future of autonomous driving since it can read the vehicle's vicinity and provide a real-time 3D visualization of the surroundings through a point cloud representation. These features can assist the autonomous vehicle in several tasks, such as object identification and obstacle avoidance, accurate speed and distance measurements, road navigation, and more. Safe navigation, however, also hinges on detecting the ground plane and road limits, which requires extracting information from the point cloud to accurately identify common road boundaries. This article presents a survey of existing methods used to detect and extract ground points from LiDAR point clouds. It summarizes the already extensive literature and proposes a comprehensive taxonomy to help understand the current ground segmentation methods that can be used in automotive LiDAR sensors.
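One of the simplest families of ground segmentation methods covered by such surveys is plane fitting with RANSAC: repeatedly fit a plane to a minimal random sample and keep the plane supported by the most points. The sketch below is a minimal baseline for flat roads, not any specific method from the surveyed literature; the function name and tolerances are assumptions.

```python
import numpy as np

def ransac_ground(points, n_iter=100, tol=0.1, seed=0):
    """Minimal RANSAC ground-plane fit.

    Repeatedly fits a plane through 3 random points and returns the
    inlier mask (|point-to-plane distance| < tol) of the best plane.
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, skip
        n /= norm
        dist = np.abs((points - p0) @ n)
        mask = dist < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

Single-plane fitting breaks down on curved or sloped roads, which is precisely why the literature also includes grid/elevation-map, channel-based, and learning-based methods, and why a taxonomy is useful.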
The world is facing a great technological transformation towards fully autonomous vehicles, where optimists predict that by 2030 autonomous vehicles will be sufficiently reliable, affordable, and common to displace most human driving. To cope with these trends, reliable perception systems must enable vehicles to hear and see all their surroundings, with light detection and ranging (LiDAR) sensors being a key instrument for recreating a 3D visualization of the world in real time. However, perception systems must rely on accurate measurements of the environment. Thus, these intelligent sensors must be calibrated and benchmarked before being placed on the market or assembled in a car. This article presents an Evaluation and Testing Platform for Automotive LiDAR sensors, with the main goal of testing both commercially available sensors and new sensor prototypes currently under development at Bosch Car Multimedia Portugal. The testing system can benchmark any LiDAR sensor under different conditions, recreating the expected driving environment in which such devices normally operate. To characterize and validate the sensor under test, the platform evaluates several parameters, such as the field of view (FoV), angular resolution, and sensor range, based only on the point cloud output. This project is the result of a partnership between the University of Minho and Bosch Car Multimedia Portugal.
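To illustrate how such parameters can be estimated "based only on the point cloud output", the sketch below derives the horizontal FoV and mean azimuthal resolution from the spread and spacing of the azimuth angles recovered from the points. This is a simplified illustration under the assumption of a single horizontal scan line, not the Bosch/University of Minho platform's actual procedure; the function name is hypothetical.

```python
import numpy as np

def horizontal_fov_and_resolution(points):
    """Estimate horizontal FoV (deg) and mean azimuthal step (deg)
    from a point cloud alone, via the recovered azimuth angles."""
    az = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    az = np.sort(np.unique(np.round(az, 3)))   # deduplicate per firing angle
    fov = az[-1] - az[0]
    res = np.mean(np.diff(az)) if len(az) > 1 else 0.0
    return fov, res
```

Range accuracy would be evaluated analogously, by placing calibrated targets at known distances inside the recreated driving environment and comparing measured against ground-truth ranges.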