A large-scale measurement campaign was carried out on a real-world motorway stretch in Hungary with the participation of international industrial and academic partners. The measurement resulted in vehicle-based and infrastructure-based sensor data that will be extremely useful for future automotive R&D activities due to the available ground truth for static and dynamic content. The aim of the measurement campaign was twofold. On the one hand, road geometry was mapped with high precision in order to build an Ultra High Definition (UHD) map of the test road. On the other hand, the vehicles—equipped with differential Global Navigation Satellite Systems (GNSS) for ground-truth localization—carried out special test scenarios while collecting detailed data using different sensors. All of the test runs were recorded by both vehicles and infrastructure. The paper also showcases application examples to demonstrate the viability of the collected data, given access to the ground-truth labeling. This data set may support a large variety of solutions for the testing and validation of different kinds of approaches and techniques. As a complementary task, the available 5G network was monitored and tested under different radio conditions to investigate the latency results for different measurement scenarios. A part of the measured data has been shared openly, so that interested automotive and academic parties may use it for their own purposes.
In recent years, verification and validation processes of automated driving systems have been increasingly moved to virtual simulation, as this allows for rapid prototyping and the use of a multitude of testing scenarios compared to on-road testing. However, in order to support future approval procedures for automated driving functions with virtual simulations, the models used for this purpose must be sufficiently accurate to be able to test the driving functions implemented in the complete vehicle model. In recent years, the modelling of environment sensor technology has gained particular interest, since it can be used to validate the object detection and fusion algorithms in Model-in-the-Loop testing. In this paper, a practical process is developed to enable a systematic evaluation of perception-sensor models on a low-level data basis. The validation framework includes, first, the execution of test drive runs on a closed highway; secondly, the re-simulation of these test drives in a precise digital twin; and thirdly, the comparison of measured and simulated perception-sensor output with statistical metrics. To demonstrate the practical feasibility, a commercial radar-sensor model (the ray-tracing-based RSI radar model from IPG) was validated using a real radar sensor (ARS-308 radar sensor from Continental). The simulation was set up in the simulation environment IPG CarMaker® 8.1.1, and the evaluation was then performed using the software package Mathworks MATLAB®. Real and virtual sensor output data on a low-level data basis were used, which enables the benchmark. We developed metrics for the evaluation, and these were quantified using statistical analysis.
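The abstract does not specify the statistical metrics used; as a minimal illustration of the third step (comparing measured and simulated sensor output statistically), the sketch below computes bias, RMSE, and standard deviation between hypothetical real and simulated radar range readings. All function names and numbers are illustrative assumptions, not the paper's actual method or data.

```python
import math

def range_error_stats(measured, simulated):
    """Compare real and simulated radar range readings for matched targets.

    Returns bias (mean error), RMSE, and standard deviation -- simple
    statistical metrics of the kind used to benchmark sensor models.
    """
    errs = [s - m for m, s in zip(measured, simulated)]
    n = len(errs)
    bias = sum(errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    std = math.sqrt(sum((e - bias) ** 2 for e in errs) / n)
    return {"bias": bias, "rmse": rmse, "std": std}

# Hypothetical range readings [m] for one target across four frames
real_ranges = [50.2, 48.9, 47.5, 46.1]
sim_ranges = [50.0, 49.1, 47.4, 46.4]
stats = range_error_stats(real_ranges, sim_ranges)
```

A near-zero bias with low RMSE would indicate that the sensor model reproduces the real sensor's range measurements without systematic offset; in practice such metrics would be computed per object and per frame over entire test runs.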
Human interaction with mobile devices has recently been established as an application field in eye-tracking research. Current technologies for gaze recovery on mobile displays cannot enable fully natural interaction with the mobile device: users are conditioned to interact with tightly mounted displays or distracted by markers in their view. We propose a novel approach that captures points of regard (PORs) with eye-tracking glasses (ETG) and then uses computer vision methodology for the robust localization of the smartphone in the head camera video. We present an integrated software package, the Smartphone Eye Tracking Toolbox (SMET), that enables accurate gaze recovery on mobile displays with heat mapping of recent attention. We report the performance of the computer vision approach and demonstrate it with various natural interaction scenarios using the SMET Toolbox, enabling ROI settings on the mobile display and showing results from eye-movement analysis, such as ROI dwell time and statistics on eye-gaze events (saccades, fixations).
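The abstract names ROI dwell time as one derived statistic; a minimal sketch of that aggregation step (all names and numbers hypothetical, not the SMET API) might look like:

```python
def roi_dwell_times(fixations):
    """Aggregate fixation durations per region of interest (ROI).

    `fixations` is a list of (roi_label, duration_ms) tuples, e.g. as
    produced after mapping each fixation's point of regard onto the
    localized smartphone display.
    """
    totals = {}
    for roi, duration in fixations:
        totals[roi] = totals.get(roi, 0) + duration
    return totals

# Hypothetical fixation sequence over two on-screen ROIs
fixes = [("keyboard", 240), ("status_bar", 120), ("keyboard", 310)]
dwell = roi_dwell_times(fixes)
```

Dwell time per ROI is then simply the sum of fixation durations landing in that region; saccade and fixation statistics would be computed analogously from the classified eye-gaze events.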
We describe a system which proposes a solution for multisensor object awareness and positioning to enable stable location awareness for a mobile service in urban areas. The system offers outdoor vision-based object-recognition technology that will extend state-of-the-art location- and context-aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with a GPRS- or UMTS-capable camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia-type data about related history might be explored by a mobile user who intends to learn within the urban environment. Ambient learning is in this way achieved by pointing the device towards an urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view. The described mobile system offers multiple opportunities for application in both mobile business and commerce, and is currently being developed as an industrial prototype.