In Europe, a considerable share of the lives lost in traffic accidents is due to inappropriate vehicle speed or headway. Excessive speed is one of the major causes of accidents on European roads, responsible for one-third of all road accidents. SASPENCE, part of the EU-funded integrated project PReVENT, is developing and evaluating an innovative system that implements a reliable and comfortable Safe Speed and Safe Distance concept, helping the driver avoid dangerous situations related to excessive speed or too small a headway. In this paper, the high-precision reconstruction of the road geometry is discussed. This information about the road geometry is needed in the project to calculate the risk of a given scenario at a given speed. A sensor fusion method is proposed that combines information from a vision sensor, a radar system, digital maps, a GPS sensor, and odometric sensors into one common description of the road geometry in front of the vehicle. In doing so, the sensor information is used to localize the ego vehicle relative to the map data. The system has been tested on real data, and the results are presented in the paper.
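The fusion of several independent road-geometry estimates into one common description can be illustrated with a minimal variance-weighted averaging sketch. This is a generic illustration, not the method described in the abstract: the function name `fuse_curvature` and the example sensor variances are assumptions for the sake of the example.

```python
import numpy as np

def fuse_curvature(estimates, variances):
    """Variance-weighted fusion of road-curvature estimates [1/m]
    (e.g. from vision, radar, and a digital map). Illustrative sketch:
    each estimate is weighted by the inverse of its variance, so more
    confident sensors contribute more to the fused result."""
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    c = np.asarray(estimates, dtype=float)
    fused = np.sum(w * c) / np.sum(w)              # weighted mean
    fused_var = 1.0 / np.sum(w)                    # variance of the fused estimate
    return fused, fused_var

# Example: three sensors estimate the curvature with differing confidence.
c, v = fuse_curvature([0.010, 0.012, 0.011], [1e-6, 4e-6, 2e-6])
```

The fused variance is always smaller than the smallest input variance, which is the basic motivation for combining redundant sensors in the first place.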
One of the key tasks for autonomous vehicles or robots is robust perception of their 3D environment, which is why such platforms are equipped with a wide range of different sensors. Building upon a robust sensor setup, understanding and interpreting the 3D environment is the next important step. Semantic segmentation of 3D sensor data, e.g. point clouds, provides valuable information for this task and is often seen as a key enabler for 3D scene understanding. This work presents an iterative deep fusion architecture for semantic segmentation of 3D point clouds, which builds upon a range image representation of the point clouds and additionally exploits camera features to increase accuracy and robustness. In contrast to other approaches, which fuse lidar and camera features once, the proposed fusion strategy iteratively combines and refines lidar and camera features at different scales inside the network architecture. Additionally, the proposed approach can handle camera failure and can jointly predict lidar and camera segmentation. We demonstrate the benefits of the presented iterative deep fusion approach on two challenging datasets, outperforming all range image-based lidar and fusion approaches. An in-depth evaluation underlines the effectiveness of the proposed fusion strategy and the potential of camera features for 3D semantic segmentation.
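The range image representation mentioned above is typically obtained by spherically projecting the point cloud onto a 2D grid. The sketch below shows this common preprocessing step; the image size (64×1024) and vertical field of view are typical values for a 64-beam lidar and are assumptions here, not the paper's exact configuration.

```python
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Spherical projection of a lidar point cloud (N, 3) to an (h, w)
    range image. Each point's azimuth selects the column and its
    elevation selects the row; the pixel value is the point's range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))   # elevation angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r
    u = 0.5 * (1.0 - yaw / np.pi) * w              # column from azimuth
    v = (1.0 - (pitch - fov_down_r) / fov) * h     # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)
    img = np.full((h, w), -1.0)                    # -1 marks empty pixels
    img[v, u] = r                                  # later points overwrite earlier ones
    return img
```

Projecting the cloud this way lets standard 2D convolutional architectures operate on lidar data, which is what makes pixel-aligned fusion with camera features straightforward.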