Commercialization of autonomous vehicle (AV) technology is a major goal of the automotive industry, and research in this space is rapidly expanding across the world. However, despite this high level of research activity, literature detailing a straightforward and cost-effective approach to the development of an AV research platform is sparse. To address this need, we present the methodology and results regarding the AV instrumentation and controls of a 2019 Kia Niro developed for a local AV pilot program. This platform includes a drive-by-wire actuation kit, an Aptiv electronically scanning radar, a stereo camera, a Mobileye computer vision system, LiDAR, an inertial measurement unit, two Global Positioning System (GPS) receivers to provide heading information, and an in-vehicle computer for driving environment perception and path planning. Robot Operating System (ROS) software is used as the system middleware between the instruments and the autonomous application algorithms. After selection, installation, and integration of these components, our results show successful utilization of all sensors, drive-by-wire functionality, a total additional power consumption of 242.8 W (typical), and an overall cost of $118,189 USD, a significant savings compared to other commercially available systems with similar functionality. This vehicle continues to serve as our primary AV research and development platform.
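The abstract names ROS as the middleware layer between the sensors and the autonomy algorithms; the minimal sketch below shows how sensor streams typically enter a ROS-based perception stack. It assumes ROS 1 with rospy, and the topic names are illustrative placeholders, not the platform's actual configuration.

```python
# Minimal sketch of a ROS perception node (ROS 1 / rospy assumed).
# Topic names below are hypothetical placeholders, not the platform's.
import rospy
from sensor_msgs.msg import PointCloud2, Image, Imu

def lidar_callback(msg):
    # LiDAR point clouds arrive here for perception processing.
    rospy.loginfo("LiDAR cloud: %d bytes", len(msg.data))

def camera_callback(msg):
    # Raw camera frames feed the vision pipeline.
    rospy.loginfo("Camera frame: %dx%d", msg.width, msg.height)

def imu_callback(msg):
    # IMU samples support localization and path planning.
    rospy.loginfo("IMU yaw rate: %.3f rad/s", msg.angular_velocity.z)

if __name__ == "__main__":
    rospy.init_node("perception_bridge")
    rospy.Subscriber("/velodyne_points", PointCloud2, lidar_callback)
    rospy.Subscriber("/camera/image_raw", Image, camera_callback)
    rospy.Subscriber("/imu/data", Imu, imu_callback)
    rospy.spin()
```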
Today’s Advanced Driver Assistance Systems (ADAS) predominantly utilize cameras to increase driver and passenger safety. Computer vision, as the enabler of this technology, extracts two key environmental features: the drivable region and surrounding objects (e.g., vehicles, pedestrians, bicycles). Lane lines are the most common characteristic extracted for drivable region detection, which is the core perception task enabling ADAS features such as lane departure warnings, lane-keeping assistance, and lane centering. However, under adverse weather conditions (e.g., occluded lane lines), lane line detection algorithms are no longer operational, preventing these ADAS features from providing their safety benefit to the driver. The performance of one of the leading computer vision system providers was tested under variable snow coverage and lane line occlusion during the 2020-2021 winter in Kalamazoo, Michigan. The results show that this computer vision system was able to provide high-confidence detections in less than 1% of all frames recorded. This is an alarming result, as 21% of all crashes in the U.S. are weather-related. To increase the capabilities of ADAS when snow occlusions are present, a tire track identification system was developed by comparing various supervised machine learning models. A custom dataset was collected using the Energy Efficient and Autonomous Vehicles lab’s research platform from Western Michigan University. A data preparation pipeline was implemented to label tire tracks and train the machine learning models. The best model achieved high-confidence detections of tire tracks in 83% of all frames in which tire tracks were present, an 82% increase in detections over the leading computer vision system provider.
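The 83% figure is a frame-level detection rate: the share of frames containing visible tire tracks in which the model produced a high-confidence detection. The sketch below illustrates that metric; the confidence threshold and record fields are assumptions, not the authors' implementation.

```python
# Illustrative frame-level detection-rate metric; the 0.5 cutoff and the
# record fields are assumptions, not the authors' implementation.
CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff for a "high confidence" detection

def detection_rate(frames):
    """frames: dicts with 'has_tracks' (ground truth) and
    'confidence' (model output for the tire-track class)."""
    relevant = [f for f in frames if f["has_tracks"]]
    if not relevant:
        return 0.0
    detected = sum(f["confidence"] >= CONFIDENCE_THRESHOLD for f in relevant)
    return detected / len(relevant)

# Example: 5 of 6 frames with visible tracks detected -> ~83%
frames = [
    {"has_tracks": True, "confidence": 0.91},
    {"has_tracks": True, "confidence": 0.84},
    {"has_tracks": True, "confidence": 0.77},
    {"has_tracks": True, "confidence": 0.62},
    {"has_tracks": True, "confidence": 0.30},
    {"has_tracks": True, "confidence": 0.88},
]
print(f"Detection rate: {detection_rate(frames):.0%}")
```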
Perception in adverse weather conditions is one of the most prominent challenges for automated driving features. The sensors used for mid-to-long-range perception that are most impacted by weather (i.e., camera and LiDAR) are susceptible to data degradation, causing potential system failures. This research series aims to better understand sensor data degradation characteristics in real-world, dynamic environmental conditions, focusing on adverse weather. To achieve this, a dataset containing LiDAR (Velodyne VLP-16) and camera (Mako G-507) data was gathered under static scenarios using a single vehicle target to quantify sensor detection performance. The relative position between the sensors and the target vehicle was varied longitudinally and laterally: the longitudinal position from 10 m to 175 m in 25 m increments, and the lateral position by moving the sensor set angle between 0 degrees (left position), 4.5 degrees (center position), and 9 degrees (right position). The tests were conducted on three days, each day representing one of the following weather conditions: clear, rain, and snow. LiDAR performance was evaluated by comparing the return point count and return point power intensity from the target vehicle. Camera performance was quantified using a YOLOv5 model to perform object detection inference, tracking the detection confidence, inaccurate classification count (type I error), and misclassification count (type II error) for the target vehicle. Overall, LiDAR showed a power intensity reduction of 22.42% in rain and 29.30% in snow, while camera confidence results were not impacted by the mild weather conditions.
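YOLOv5 inference of the kind used for the camera evaluation can be reproduced with the publicly available ultralytics/yolov5 models via torch.hub. The sketch below is illustrative: the abstract does not state which model variant, weights, or image data the authors used, so those choices here are assumptions.

```python
# Hedged sketch of camera evaluation with YOLOv5 via torch.hub; model
# variant and image path are illustrative assumptions.
import torch

# Load a pretrained YOLOv5 model (the paper does not state which variant).
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

results = model("test_frame.jpg")  # placeholder image path
# results.xyxy[0] rows: x1, y1, x2, y2, confidence, class index
for *box, conf, cls in results.xyxy[0].tolist():
    label = model.names[int(cls)]
    if label == "car":
        print(f"target vehicle detected, confidence={conf:.2f}")
    else:
        # A wrong-class detection would count toward the type I / type II
        # error tallies described in the abstract.
        print(f"non-target detection: {label} ({conf:.2f})")
```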
Modern vehicles use Advanced Driver Assistance Systems (ADAS) to automate certain aspects of driving, which improves operational safety. In the U.S. in 2020, 38,824 fatalities occurred due to automotive accidents, and typically about 25% of these are associated with inclement weather. ADAS features have been shown to reduce potential collisions by up to 21%, thus reducing overall accidents. However, ADAS typically utilize camera sensors that rely on lane visibility and the absence of obstructions in order to function, rendering them ineffective in inclement weather. To address this research gap, we propose a new technique to estimate snow coverage so that existing and new ADAS features can be used during inclement weather. In this study, we use a single camera sensor and historical weather data to estimate snow coverage on the road. Camera data was collected over 6 miles of arterial roadways in Kalamazoo, MI. Additionally, infrastructure-based weather sensor visibility data from an Automated Surface Observing System (ASOS) station was collected. Supervised machine learning (ML) models were developed to determine the category of snow coverage using different features from the images and ASOS data. The best-performing model achieved an accuracy of 98.8% in categorizing instances as none, standard, or heavy snow coverage. These categories are essential for the future development of ADAS products designed to detect drivable regions in varying degrees of snow coverage, such as clear weather (the none condition) and our ongoing work in tire track detection (the standard category). Overall, this research demonstrates that purpose-built computer vision algorithms are capable of enabling ADAS to function in inclement weather, widening their operational design domain (ODD) and thus lowering annual weather-related fatalities.
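A three-class supervised classifier of this kind can be prototyped with scikit-learn. The sketch below uses a random forest on synthetic stand-in features, since the abstract does not disclose the authors' chosen model or feature set; the feature names and data are hypothetical.

```python
# Minimal sketch of a three-class snow-coverage classifier; the features
# and model are assumptions, and the data is synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-frame features: e.g., brightness, texture, whiteness,
# plus ASOS visibility; one row per camera frame.
X = rng.random((500, 4))
y = rng.integers(0, 3, size=500)  # 0 = none, 1 = standard, 2 = heavy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
# With real labeled features (not this random stand-in), accuracy would
# be the metric reported in the abstract.
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.1%}")
```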