<div class="section abstract"><div class="htmlview paragraph">Today’s Advanced Driver Assistance Systems (ADAS) predominantly utilize cameras to increase driver and passenger safety. Computer vision, as the enabler of this technology, extracts two key environmental features: the drivable region and surrounding objects (e.g., vehicles, pedestrians, bicycles). Lane lines are the most common characteristic extracted for drivable region detection, which is the core perception task enabling ADAS features such as lane departure warnings, lane-keeping assistance, and lane-centering. However, when subject to adverse weather conditions (e.g., occluded lane lines) the lane line detection algorithms are no longer operational. This prevents the ADAS feature from providing the benefit of increased safety to the driver. The performance of one of the leading computer vision system providers was tested in conditions of variable snow coverage and lane line occlusion during the 2020-2021 winter in Kalamazoo, Michigan. The results show that this computer vision system was only able to provide high confidence detections in less than 1% of all frames recorded. This is an alarming result, as 21% of all crashes in the U.S. are weather-related. To increase the capabilities of ADAS when snow-occlusions are present, a tire track identification system was developed by comparing various supervised machine learning models. A custom dataset was collected using the Energy Efficient and Autonomous Vehicles lab’s research platform from Western Michigan University. A data preparation pipeline was implemented to label tire tracks and train the machine learning models. The best model achieved high confidence detections of tire tracks in 83% of all frames of which tire tracks were present, an 82% increase in detections than the leading computer vision system provider.</div></div>
<div class="section abstract"><div class="htmlview paragraph">Contemporary ADS and ADAS localization technology utilizes real-time perception sensors such as visible light cameras, radar sensors, and lidar sensors, greatly improving transportation safety in sufficiently clear environmental conditions. However, when lane lines are completely occluded, the reliability of on-board automated perception systems breaks down, and vehicle control must be returned to the human driver. This limits the operational design domain of automated vehicles significantly, as occlusion can be caused by shadows, leaves, or snow, which all occur in many regions. High-definition map data, which contains a high level of detail about road features, is an alternative source of the required lane line information. This study details a novel method where high-definition map data are processed to locate fully occluded lane lines, allowing for automated path planning in scenarios where it would otherwise be impossible. A proxy high-definition map dataset with high-accuracy lane line geospatial positions was generated for routes at both the Eaton Proving Grounds and Campus Drive at Western Michigan University (WMU). Once map data was collected for both routes, the WMU Energy Efficient and Autonomous Vehicles Laboratory research vehicles were used to collect video and high-accuracy GNSS data. The map data and GNSS data were fused together using a sequence of data processing and transformation techniques to provide occluded lane line geometry from the perspective of the ego vehicle camera system. The recovered geometry is then overlaid on the video feed to provide lane lines, even when they are completely occluded and invisible to the camera. This enables the control system to utilize the projected lane lines for path planning, rather than failing due to undetected, occluded lane lines. This initial study shows that utilization of technology outside of the norms of automated vehicle perception successfully expands the operational design domain to include occluded lane lines, a necessary and critical step for the achievement of complete vehicle autonomy.</div></div>
<div class="section abstract"><div class="htmlview paragraph">Practical applications of recently developed sensor fusion algorithms perform poorly in the real world due to a lack of proper evaluation during development. Existing evaluation metrics do not properly address a wide variety of testing scenarios. This issue can be addressed using proactive performance measurements such as the tools of resilience engineering theory rather than reactive performance measurements such as root mean square error. Resilience engineering is an established discipline for evaluating proactive performance on complex socio-technical systems which has been underutilized for automated vehicle development and evaluation. In this study, we use resilience engineering metrics to assess the performance of a sensor fusion algorithm for vehicle localization. A Kalman Filter is used to fuse GPS, IMU and LiDAR data for vehicle localization in the CARLA simulator. This vehicle localization algorithm was then evaluated using resilience engineering metrics in the simulated multipath and overpass scenario. These scenarios were developed in the CARLA simulator by collecting real-world data in an overpass and multipath scenario using WMU’s research vehicle. The absorptive, adaptative, restorative capacities, and the overall resilience of the system was assessed by using the resilience triangle. Simulation results indicate that the vehicle localization pipeline possesses a higher quantitative resilience when encountering overpass scenarios. Nevertheless, the system obtained a higher adaptive capacity when encountering multipath scenarios. These resilience engineering metrics show that the fusion systems recover faster when encountering disturbances due to signal interference in overpasses and that the system is in a disturbed state for a shorter duration in multipath scenarios. Overall these results demonstrate that resilience engineering metrics provide valuable insights regarding complicated systems such as automated vehicle localization. In future work, the insights from resilience engineering can be used to improve the design and thus performance of future localization algorithms.</div></div>