Abstract: In this paper, an approach to 360-degree multi-sensor fusion for static and dynamic obstacles is presented. The perception of static and dynamic obstacles is achieved by combining the advantages of model-based object tracking and an occupancy map. For the model-based object tracking, a novel multi-reference-point tracking system, called the best knowledge model, is introduced. The best knowledge model makes it possible to track and describe objects with respect to the most suitable reference point. It is explained how the object tracking and the occupancy map closely interact and benefit from each other. Experimental results of the 360-degree multi-sensor fusion system from an automotive test vehicle are shown.
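To make the multi-reference-point idea concrete, here is a minimal sketch, not the paper's actual formulation: it assumes the tracked object is a 2D oriented box and that the "best" reference point is simply the box corner closest to the ego vehicle, since that corner is typically the best-observed feature. All names (`BoxTrack`, `REF_OFFSETS`, `switch_reference`, `best_reference`) are hypothetical.

```python
# Sketch of a multi-reference-point ("best knowledge") track state.
# Assumption (not from the paper): the object is a 2D oriented box and the
# reference point is chosen as the box corner closest to the ego origin.
import math
from dataclasses import dataclass

@dataclass
class BoxTrack:
    x: float        # reference point x in ego frame [m]
    y: float        # reference point y in ego frame [m]
    yaw: float      # heading [rad]
    length: float   # box length [m]
    width: float    # box width [m]
    ref: str        # current reference point, e.g. "front_left"

# Offsets from box center to each candidate reference point,
# as fractions of length/width in the box frame.
REF_OFFSETS = {
    "front_left":  (+0.5, +0.5),
    "front_right": (+0.5, -0.5),
    "rear_left":   (-0.5, +0.5),
    "rear_right":  (-0.5, -0.5),
}

def center(track: BoxTrack) -> tuple:
    """Box center in the ego frame, derived from the current reference point."""
    dx, dy = REF_OFFSETS[track.ref]
    ox, oy = dx * track.length, dy * track.width
    c, s = math.cos(track.yaw), math.sin(track.yaw)
    return (track.x - (c * ox - s * oy), track.y - (s * ox + c * oy))

def switch_reference(track: BoxTrack, new_ref: str) -> BoxTrack:
    """Re-express the same box with respect to a different reference point."""
    cx, cy = center(track)
    dx, dy = REF_OFFSETS[new_ref]
    ox, oy = dx * track.length, dy * track.width
    c, s = math.cos(track.yaw), math.sin(track.yaw)
    return BoxTrack(cx + c * ox - s * oy, cy + s * ox + c * oy,
                    track.yaw, track.length, track.width, new_ref)

def best_reference(track: BoxTrack) -> str:
    """Pick the corner closest to the ego origin as the best-observed point."""
    def dist(ref: str) -> float:
        t = switch_reference(track, ref)
        return math.hypot(t.x, t.y)
    return min(REF_OFFSETS, key=dist)
```

The key property illustrated is that switching reference points is lossless: the box geometry is unchanged, only the anchor of the state vector moves, so the filter can always describe the object at its best-observed point.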
Abstract: Future Advanced Driver Assistance Systems (ADAS) require detailed information about occupancy states in the vehicle's local environment. In contrast to widespread occupancy grids, this information should be represented in a compact, scalable, and easy-to-interpret data structure. In this paper, we show how occupancy probabilities can be represented efficiently in our 2D Interval Map framework. The basic idea of this approach is to discretize the vehicle's environment only in the longitudinal direction and to avoid quantization errors in the lateral direction by storing continuous values. In order to deal correctly with dynamic obstacles in ADAS scenarios, the map also interacts with a model-based object tracking. The comparison of our experimental results to a ground truth illustrates the differences between grid-based and interval-based environment representations. A tested collision avoidance function yields similar results for both representations, while computation times and memory requirements are substantially improved by the application of the 2D Interval Map.
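The following is a minimal sketch of the interval idea, not the authors' implementation: it assumes each longitudinal slice stores a list of laterally continuous occupied intervals, each carrying an occupancy probability. The names (`Interval`, `IntervalMap`) and the max-probability query are illustrative assumptions.

```python
# Sketch of a 2D Interval Map: discretized longitudinally, continuous laterally.
from dataclasses import dataclass, field

@dataclass
class Interval:
    y_min: float   # continuous lateral extent [m], no lateral quantization
    y_max: float
    p_occ: float   # occupancy probability of this interval

@dataclass
class IntervalMap:
    x_min: float        # longitudinal start of the map [m]
    cell_length: float  # longitudinal discretization [m]
    n_cells: int
    cells: list = field(default_factory=list)

    def __post_init__(self):
        # One list of occupied intervals per longitudinal cell.
        self.cells = [[] for _ in range(self.n_cells)]

    def insert(self, x: float, y_min: float, y_max: float, p_occ: float):
        """Add an occupied interval to the longitudinal cell containing x."""
        i = int((x - self.x_min) / self.cell_length)
        if 0 <= i < self.n_cells:
            self.cells[i].append(Interval(y_min, y_max, p_occ))

    def occupancy(self, x: float, y: float) -> float:
        """Max occupancy probability over all intervals covering (x, y)."""
        i = int((x - self.x_min) / self.cell_length)
        if not (0 <= i < self.n_cells):
            return 0.0
        probs = [iv.p_occ for iv in self.cells[i]
                 if iv.y_min <= y <= iv.y_max]
        return max(probs, default=0.0)

# Usage: a 50 m map with 0.5 m longitudinal cells; lateral bounds stay exact.
imap = IntervalMap(x_min=0.0, cell_length=0.5, n_cells=100)
imap.insert(x=12.3, y_min=-1.2, y_max=0.4, p_occ=0.9)
print(imap.occupancy(12.3, 0.0))   # -> 0.9
```

Because memory grows with the number of intervals rather than with a fixed lateral resolution, this structure stays compact in sparse scenes, which is consistent with the reduced memory requirements reported in the abstract.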
Autonomous vehicles demand detailed maps to maneuver reliably through traffic, and these maps need to be kept up-to-date to ensure safe operation. A promising way to adapt the maps to the ever-changing road network is to use crowdsourced data from a fleet of vehicles. In this work, we present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment, including drivable area, lane markings, poles, obstacles, and more, as a 3D mesh. Each vehicle contributes locally reconstructed submaps as lightweight meshes, making our method applicable to a wide range of reconstruction methods and sensor modalities. Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field, which is supervised using the submap meshes to predict a fused environment representation. We leverage memory-efficient sparse feature grids to scale to large areas and introduce a confidence score to model uncertainty in the scene reconstruction. Our approach is evaluated on two datasets with different local mapping methods, showing improved pose alignment and reconstruction quality over existing methods. Additionally, we demonstrate the benefit of multi-session mapping and examine the amount of data required to enable high-fidelity map learning for autonomous vehicles.
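As an illustration of the general technique, here is a minimal PyTorch sketch, not the paper's architecture: a feature grid plus MLP predicts an SDF value and a confidence score per query point, supervised so that points sampled from submap meshes lie on the zero level set. Assumptions beyond the abstract: PyTorch, a dense grid standing in for the paper's sparse feature grids, the layer sizes, and the confidence-weighted loss form.

```python
# Sketch of a neural SDF with per-point confidence, supervised by mesh samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralSDF(nn.Module):
    def __init__(self, grid_res: int = 64, feat_dim: int = 8, extent: float = 50.0):
        super().__init__()
        self.extent = extent  # map spans [-extent, extent]^3 in meters
        # Dense grid for brevity; the paper uses memory-efficient sparse grids.
        self.grid = nn.Parameter(
            torch.zeros(1, feat_dim, grid_res, grid_res, grid_res))
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),  # outputs: (sdf value, confidence logit)
        )

    def forward(self, xyz: torch.Tensor):
        # xyz: (N, 3) world coordinates; normalize to [-1, 1] and
        # trilinearly interpolate the grid features at each query point.
        g = (xyz / self.extent).view(1, 1, 1, -1, 3)
        feats = F.grid_sample(self.grid, g, align_corners=True)
        feats = feats.view(self.grid.shape[1], -1).t()  # (N, feat_dim)
        sdf, conf_logit = self.mlp(feats).unbind(-1)
        return sdf, torch.sigmoid(conf_logit)

# Supervision sketch: surface points from submap meshes should have sdf == 0.
# The confidence down-weights the SDF residual where submaps disagree, while
# the log term keeps confidence from collapsing to zero (assumed loss form).
model = NeuralSDF()
surface_pts = torch.randn(1024, 3) * 10.0  # placeholder for mesh samples
sdf, conf = model(surface_pts)
loss = (conf * sdf.abs()).mean() - 0.01 * torch.log(conf + 1e-6).mean()
loss.backward()
```

Minimizing this loss drives the confidence to be roughly inversely proportional to the residual, so regions where the submaps are noisy or inconsistent are marked as uncertain rather than forced onto a single surface.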