Using multiple sensors for environment perception in autonomous vehicles is common today. Data from these sensors can be fused at different levels: before object detection, after object detection, or after tracking the moving objects. In this paper we detail our object-detection-level fusion between laser and stereo-vision sensors, as opposed to pre-detection or track-level fusion. Our laser processing produces a list of objects with position and dynamic properties for each object; similarly, the stereo-vision output of another team consists of a list of detected objects with position and classification properties for each object. We apply a Bayesian fusion technique to these two lists to obtain a new list of fused objects, which is then used in the tracking phase to track moving objects in an intersection-like scenario. The results obtained on data sets from the INTERSAFE-2 demonstrator vehicle show that this fusion improves the data-association and track-management steps.
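For the Gaussian special case, object-level Bayesian fusion of a matched laser/stereo object pair reduces to a product of Gaussians: the fused estimate's precision is the sum of the input precisions. The sketch below is a minimal illustration of that idea, not the paper's exact implementation; the function name and the numeric variances are assumptions.

```python
def bayesian_fuse(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity
    (e.g. an object's x-position from lidar and from stereo vision).
    The product of the two Gaussians is again Gaussian, with
    precision = sum of precisions and mean = precision-weighted mean."""
    var_f = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu_f = var_f * (mu_a / var_a + mu_b / var_b)
    return mu_f, var_f

# Fuse the x-coordinate of a matched object pair: the more precise
# lidar measurement (var 0.04) dominates the stereo one (var 0.25).
mu, var = bayesian_fuse(10.2, 0.04, 10.5, 0.25)
```

The fused variance is always smaller than either input variance, which is why fusing the two detectors tightens the position estimate fed into tracking.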
Abstract—In this paper, we describe our approach for intersection safety developed within the scope of the European project INTERSAFE-2. A complete solution to the safety problem, including the tasks of perception and risk assessment using on-board lidar and stereo-vision sensors, is presented together with results.

I. INTRODUCTION

About 30% to 60% (depending on the country) of all injury accidents, and about 16% to 36% of fatalities, are intersection related. In addition, accident scenarios at intersections are among the most complex (different types of road users, various orientations and speeds). The INTERSAFE-2 project¹ aims to develop and demonstrate a Cooperative Intersection Safety System (CISS) able to significantly reduce injury and fatal accidents at intersections. Vehicles equipped with communication means and on-board sensor systems cooperate with roadside infrastructure to achieve a comprehensive system that contributes to the EU-25 zero-accident vision as well as to a significant improvement in traffic-flow efficiency, thereby reducing fuel consumption in urban areas. By networking state-of-the-art technologies for sensors, infrastructure systems, communications, digital map contents and new accurate positioning techniques, INTERSAFE-2 aims to bring intersection safety systems much closer to market introduction. This paper details the technical solution developed on the Volkswagen demonstrator of the project. This solution takes as input raw data from a lidar and a stereo-vision system and delivers as output a level of risk between the host vehicle and the other entities present at the intersection. This paper is joint work between INRIA Rocquencourt (France), the Technical University of Cluj (Romania) and the University of Grenoble 1 (France). The rest of the paper is organized as follows. In the next section, we present the demonstrator used for this work and the sensors installed on it.
We summarize the software architecture in section III. In sections IV and V we present the sensor processing for lidar and stereo vision. In sections VI and VII, we detail our work on fusion and tracking. The Risk

¹ http://www.intersafe-2.eu

II. EXPERIMENTAL SETUP

The demonstrator vehicle used to collect datasets for this work has multiple sensors installed on it. It has a long-range laser scanner with a field of view of 160° and a maximum range of 150 m. Other sensors installed on this demonstrator include a stereo-vision camera, four short-range radars (SRR), one at each corner of the vehicle, and a long-range radar (LRR) at the front of the vehicle (Figure 1). The work presented in this paper is concerned only with the processing and fusion of lidar and stereo-vision data.

III. SOFTWARE ARCHITECTURE

Figure 2 illustrates the software architecture of the system. This architecture is composed of five modules: 1) the lidar data processing module, which takes as input the raw data provided by the laser scanner and delivers (i) an estimation of the position of the host vehicle in the intersection an...
In this paper, we present a real-time algorithm for online simultaneous localization and mapping (SLAM) with detection and tracking of moving objects (DATMO) in dynamic outdoor environments, from a moving vehicle equipped with a laser sensor and odometry. To correct the vehicle location from odometry, we introduce a new, fast implementation of an incremental scan-matching method that works reliably in dynamic outdoor environments. Once a good vehicle location is estimated, the surrounding map is updated incrementally and moving objects are detected without a priori knowledge of the targets. Detected moving objects are finally tracked using the Global Nearest Neighbor (GNN) method. The experimental results on a dataset collected from the INTERSAFE-2 demonstrator for a typical scenario show the effectiveness of this technique.